rkagerer a day ago

In a sense, Adrian Thompson kicked this off in the '90s when he applied an evolutionary algorithm to FPGA hardware. Using a "survival of the fittest" approach, he taught a board to distinguish between a 1 kHz and a 10 kHz tone.

The final generation of the circuit was more compact than anything a human engineer would ever come up with (reducible to a mere 37 logic gates), and utilized all kinds of physical nuances specific to the chip it evolved on - including feedback loops, EMI effects between unconnected logic units, and (if I recall) operating transistors outside their saturation region.
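
For illustration, here is a minimal sketch of the kind of generational loop involved - the bitstream length, population size, mutation rate, and fitness test are made up for the example, not Thompson's actual setup:

    # Hypothetical sketch of bitstream evolution; all parameters are illustrative.
    import random

    BITS, POP, GENS = 1800, 50, 100

    def fitness(bits):
        # In the real experiment this step programmed the FPGA with the candidate
        # bitstream and measured how well its output separated the two tones.
        return random.random()  # placeholder score

    def mutate(bits, rate=0.01):
        return [b ^ (random.random() < rate) for b in bits]

    population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
    for _ in range(GENS):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP // 2]  # keep the fittest half
        children = [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
        population = parents + children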

Article: https://www.damninteresting.com/on-the-origin-of-circuits/

Paper: https://www.researchgate.net/publication/2737441_An_Evolved_...

Reddit: https://www.reddit.com/r/MachineLearning/comments/2t5ozk/wha...

  • dang 19 hours ago

    Related. Others?

    The origin of circuits (2007) - https://news.ycombinator.com/item?id=18099226 - Sept 2018 (25 comments)

    On the Origin of Circuits: GA Exploits FPGA Batch to Solve Problem - https://news.ycombinator.com/item?id=17134600 - May 2018 (1 comment)

    On the Origin of Circuits (2007) - https://news.ycombinator.com/item?id=9885558 - July 2015 (12 comments)

    An evolved circuit, intrinsic in silicon, entwined with physics (1996) - https://news.ycombinator.com/item?id=8923902 - Jan 2015 (1 comment)

    On the Origin of Circuits (2007) - https://news.ycombinator.com/item?id=8890167 - Jan 2015 (1 comment)

    That's not a lot of discussion—we should have another thread about this sometime. If you want to submit it in (say) a week or two, email hn@ycombinator.com and we'll put it in the second-chance pool (https://news.ycombinator.com/pool, explained at https://news.ycombinator.com/item?id=26998308), so it will get a random placement on HN's front page.

    • trogdor 2 hours ago

      If you’re up for sharing, I’m curious to know approximately how many hours each week you spend working on HN. It seems like it would be an enormous amount of time, but I’m just guessing.

    • Retr0id 43 minutes ago

      Did something funky happen to the timestamps in this thread? I could've sworn I was reading it last night (~12h ago)

      • dang 36 minutes ago

        It looks like we put the thread in HN's second-chance pool (https://news.ycombinator.com/item?id=26998308), so it got re-upped and given a random slot on the frontpage.

        The relativized timestamps are an artifact of the re-upping system. There are past explanations here: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

        Sorry for the confusion! I know it's weird but the alternative turns out to be even more confusing and we've never figured out how to square that circle.

      • skissane 35 minutes ago

        I think dang did something manual to push it back to the frontpage, and that reset the timestamps on everyone’s existing comments…

        There is a comment here by me which says “2 hours ago”, I swear I wrote it longer ago than that - indeed, my threads page still says I wrote it 20 hours ago, so it is like part of the code knows when I really wrote it, another part now thinks I wrote it 18 hours later than I did…

        • dang 32 minutes ago

          Yes, the relativized timestamps only show on /news (i.e. the frontpage) and /item pages. You can always see the original timestamps on other pages, like /submitted, /from, or (as you say) /threads.

          Edit: I checked the code and the actual list is:

            '(news item reply show ask active best over classic).
  • quanto 21 hours ago

    Fascinating paper. Thanks for the ref.

    Operating transistors outside the linear region (the saturated "on") on a billion+ scale is something that we as engineers and physicists haven't quite figured out, and I am hoping that this changes in future, especially with the advent of analog neuromorphic computing. The quadratic region (before the "on") is far more energy efficient and the non-linearity could actually help with computing, not unlike the activation function in an NN.

    Of course, modeling the nonlinear behavior is difficult. My prof would say that for every coefficient in SPICE's transistor models, someone dedicated his entire PhD (and there are a lot of these coefficients!).

    I haven't been in touch with the field since I moved up the stack (numerical analysis/ML), so I would love to learn more if there has been recent progress in this area.

    • Aurornis 3 hours ago

      The machine learning model didn’t discover something that humans didn’t know about. It abused some functions specific to the chip that could not be repeated in production or even on other chips or other configurations of the same chip.

      That is a common problem with fully free-form machine learning solutions: they can stumble upon something that technically works in their training set, but that any human who understood the full system would never actually use, due to the other problems associated with it.

      > The quadratic region (before the "on") is far more energy efficient

      Take a look at the structure of something like CMOS and you’ll see why running transistors in anything other than “on” or “off” is definitely not energy efficient. In fact, the transitions are where the energy usage largely goes. We try to get through that transition period as rapidly as possible because minimal current flows when the transistors reach the on or off state.

      There are other logic arrangements, but I don’t understand what you’re getting at by suggesting circuits would be more efficient. Are you referring to the reduced gate charge?

      • Thorondor 3 hours ago

        > Take a look at the structure of something like CMOS and you’ll see why running transistors in anything other than “on” or “off” is definitely not energy efficient. In fact, the transitions are where the energy usage largely goes. We try to get through that transition period as rapidly as possible because minimal current flows when the transistors reach the on or off state.

        Sounds like you might be thinking of power electronic circuits rather than CMOS. In a CMOS logic circuit, current does not flow from Vdd to ground as long as either the p-type or the n-type transistor is fully switched off. The circuit under discussion was operated in subthreshold mode, in which one transistor in a complementary pair is partially switched on and the other is fully switched off. So it still only uses power during transitions, and the energy consumed in each transition is lower than in the normal mode because less voltage is switched at the transistor gate.
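
        For reference, the usual first-order textbook approximation for CMOS switching power (a generic formula, nothing specific to the chip under discussion) is:

            P_{dyn} \approx \alpha \, C_L \, V_{DD}^2 \, f

        where \alpha is the activity factor, C_L the switched load capacitance, V_{DD} the supply voltage, and f the clock frequency; short-circuit current during the transition and leakage add smaller terms on top.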

        • Aurornis 2 hours ago

          > In a CMOS logic circuit, current does not flow from Vdd to ground as long as either the p-type or the n-type transistor is fully switched off.

          Right, but how do you get the transistor fully switched off? Think about what happens during the time when it’s transitioning between on and off.

          You can run the transistors from the previous stage in a different part of the curve, but that’s not an isolated effect. Everything that impacts switching speed and reduces the current flowing to turn the next gate on or off will also impact power consumption.

          There might be some theoretical optimization where the transistors are driven differently, but at what cost of extra silicon and how delicate is the balance between squeezing a little more efficiency and operating too close to the point where minor manufacturing changes can become outsized problems?

      • adrian_b 3 hours ago

        The previous poster was probably thinking about very low power analog circuits or extremely slow digital circuits (like those used in wrist watches), where the on-state of the MOS transistors is in the subthreshold conduction region (while the off state is the same off state as in any other CMOS circuits, ensuring a static power consumption determined only by leakage).

        Such circuits are useful for something powered by a battery that must have a lifetime measured in years, but they cannot operate at high speeds.

      • nextaccountic 3 hours ago

        In other words, optimization algorithms in general are prone to overfitting. Fortunately there are techniques to deal with that. Thing is, once you find a solution that generalizes better to different chips, it probably won't be as small as the original solution.

      • ajmurmann 2 hours ago

        Seems like this overfitting problem could have been trivially fixed by running it on more than one chip, no?

        • Aurornis 2 hours ago

          Unfortunately not. This is analogous to writing a C program that relied on undefined behavior on the specific architecture and CPU of your developer machine. It’s not portable.

          The behavior could change from one manufacturing run to another. The behavior could disappear altogether in a future revision of the chip.

          The behavior could even disappear if you change some other part of the design that then relocated the logic to a different set of cells on the chip. This was noted in the experiment where certain behavior depended on logic being placed in a specific location, generating certain timings.

          If you rely on anything other than the behavior defined by the specifications, you’re at risk of it breaking. This is a problem with arriving at empirical solutions via guess and check, too.

          Ideally you’d do everything in simulation rather than on-chip where possible. The simulator would only function in ways supported by the specifications of the chip without allowing undefined behavior.

    • nyeah 3 hours ago

      I'm having trouble understanding. Chips with very high transistor counts tend to use saturation/turn-off almost exclusively. Very little is done in the linear region because it burns a lot of power and it's less predictable.

    • shermantanktop 2 hours ago

      > Operating transistors outside the linear region (the saturated "on")

      Do fuzz pedals count?

      To be fair, we know they work and basically how they work, but the sonic nuances can be very hard to predict from a schematic.

    • ImHereToVote 21 hours ago

      I believe neuromorphic spiking hardware will be the step to truly revolutionize the field of anthropod contagion issues.

      • trainsarebetter 19 hours ago

        Can’t tell if this is a joke or not

        • burnished 17 hours ago

          I came in already knowing what neuromorphic hardware is and I'm also unsure

          • fouc 15 hours ago

            joke I think, anthropod is probably another way of saying bugs/ants haha

            • Sharlin 3 hours ago

              *arthropod, as in "joint(ed) leg" (cf. arthritis), GP misspelled it. "Anthropod" would mean something like "human leg".

            • burnished 15 hours ago

              Oh christ you're right, they were actually being really funny. I was being super literal and imagined them being very excited about futuristic advances in giant isopod diagnosis and care

            • ImHereToVote 10 hours ago

              Yeah, anthropic bugs. The planet is infested with them.

      • igleria 3 hours ago

        at last, something possibly more buggy than vibe coding!

      • gtirloni 2 hours ago

        My thoughts, exactly.

  • viccis 20 hours ago

    I really wish I still had the link, but there used to be a website that listed a bunch of times in which machine learning was used (mostly via reinforcement learning) to teach a computer how to play a video game and it ended up using perverse strategies that no human would do. Like exploiting weird glitches (https://www.youtube.com/watch?v=meE5aaRJ0Zs shows this with Q*bert)

    Closest I've found to the old list I used to go to is this: https://heystacks.com/doc/186/specification-gaming-examples-...

    • matsemann 3 hours ago

      In my thesis many years ago [0] I used EAs to build bicycle wheels. They were so annoyingly good at exploiting whatever idiosyncrasies existed in my wheel simulator. In the first iterations of my simulator, it managed to evolve wheels that would slowly oscillate due to floating point instability or something; the forces applied to them would increase and increase until the whole simulator exploded and the recorded forces were all over the place, which of course out-competed any real wheel in at least some objective dimension.

      After fixing those bugs, I mostly struggled with it taunting me. Like building a wheel with all the spokes going from the hub straight up to the rim. It would of course break down when rolling, but on the objective of "how much load can it handle on the bike" it again out-competed every other wheel, and thus was on the Pareto front of that objective and kept showing up through all my tests. Hated that guy, heh. I later changed it to test all wheels in at least 4 orientations; it would then still taunt me with wheels like (c) in this figure [1], exploiting that.

      [0]: https://news.ycombinator.com/item?id=10410813 [1]: https://imgur.com/a/LsONTGc

    • y33t 18 hours ago

      My favorite example was a game of pong with the goal of staying alive as long as possible. One ML algo just paused the game and left it like that.

      • chefandy 3 hours ago

        My favorite was the ML model learning how to make the lowest-impact landing in a flight simulator: it discovered that it could wrap the impact float value if the impact was high enough, so instead of figuring out the optimal landing, it started figuring out the optimal path to the highest-impact crashes.

        • hammock 2 hours ago

          This comment ought to be higher up. Such a perfect summary of what I have struggled to understand, which is the “danger” of AI once we allow it to control things

          And yes, you can fix the bug, but the bike wheel guy shows you there will always be another bug. We need a paper/proof that invents a process that can put some kind of finite cap or limiter on the possible bug surface, without human intervention.

          • themaninthedark 11 minutes ago

            There is an apocryphal story about AI:

            Conglomerate developed an AI and vision system that you could hook up to your Anti-aircraft systems to eliminate any chance of friendly fire. DARPA and the Pentagon went wild, pushing the system through test so they could get to the live demonstration.

            They hook up a live system loaded with dummy rounds, fly a few friendly planes over, and everything looks good. However, when they fly a captured MiG-21 over, the system fails to respond. The brass is upset and the engineers are all scratching their heads trying to figure out what is going on, but as the sun sets the system lights up, trying to shoot down anything in the sky.

            They quickly shut down the system and do a postmortem. In the review they find that all the training data for friendly planes were perfect-weather, blue-sky overflights, and all the training data for the enemy were nighttime/low-light pictures. The AI determined that anything flying during the day is friendly and anything at night is to be terminated with extreme prejudice.

          • aurbano 2 hours ago

            Is AI the danger, or is our inability to simplify a problem down to an objective function the problem?

            If anything, AI could help by "understanding" the real objective, so we don't have to code these simplified goals that ML models end up gaming no?

            • TeMPOraL an hour ago

              Simplification is the problem here, arguably. Even a simple-sounding objective (say, a bicycle wheel that holds load the best) has at least one implicit assumption: it will be handled and used in the real world. Which means it'll be subject to sloppy handling and thermal spikes and weather and abuse and all kinds of things that are not just meeting the goal. Any of those cheesy AI designs, if you were to 3D-print/replicate them, would fall apart as you picked them up. So the problem seems to be that the ML algorithm is getting too simple a goal function - one lacking the "used in the real world" part.

              I feel that a good first step would be to introduce some kind of random jitter into the simulation. Like, in case of the wheels, introduce road bumps, and perhaps start each run by simulating dropping the wheel from a short distance. This should quickly weed out "too clever" solutions - as long as the jitter is random enough, so RL won't pick up on it and start to exploit its non-randomness.

              Speaking of road bumps: there is no such thing in reality as a perfectly flat road; if the wheel simulator is just rolling wheels on mathematically perfect roads, that's a big deviation from reality - precisely the kind that allows for "hacky" solutions that are not possible in the real world.
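
              A minimal sketch of that jittered-evaluation idea - the simulate() callable and the jitter parameters are invented for illustration, not taken from any of the projects discussed here:

                  import random

                  def evaluate_robust(candidate, simulate, n_trials=8):
                      # Score the candidate under several randomly perturbed conditions
                      # and keep the worst result, so designs that only work in one
                      # idealized setup get weeded out.
                      scores = []
                      for _ in range(n_trials):
                          scores.append(simulate(
                              candidate,
                              bump_height=random.uniform(0.0, 0.01),
                              drop_height=random.uniform(0.0, 0.05),
                              temperature=random.uniform(-10.0, 40.0),
                          ))
                      return min(scores)  # or the mean, for a softer criterion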

            • tbrake an hour ago

              How would more AI help? "given this goal with these parameters, figure out if another AI will ever game it into eventual thermonuclear war. "

              Feels halting problem-esque.

              • aurbano 41 minutes ago

                My point was that instead of blaming ML - or optimisation tools really - for gaming objective functions and coming up with non-solutions that do maximise reward, AI could instead be used to measure the reward/fitness of the solution.

                So to the OP's example "optimise a bike wheel", technically an AI should be able to understand whether a proposed wheel is good or not, in a similar way to a human.

            • hammock 2 hours ago

              >simplify a problem down to an objective function

              Yes, I have an intuition that this is NP hard though

        • tlb 2 hours ago

          All these claims are like "programming is impossible because I typed in a program and it had a bug". Yes, everyone's first attempt at a reward function is hackable. So you have to tighten up the reward function to exclude solutions you don't want.

        • 1234letshaveatw 2 hours ago

          Ummm, I'm going to hold off on that FSD subscription for a bit longer...

      • voidUpdate 3 hours ago

        Is that Learnfun/Playfun that tom7 made? That one paused just before losing on tetris and left it like that, because any other input would make it lose

        • y33t 3 hours ago

          No I want to say this was ~10 years ago. Happened to a university researcher IIRC.

    • robertjpayne 20 hours ago

      Make no mistake, most humans will exploit any glitches and bugs they can find for personal advantage in a game. It's just that machines can exploit timing bugs better.

      • Muromec 3 hours ago

        Some people are able to do frame-perfect inputs semi-consistently, from what I understand. I don't understand how, as my own performance is more like hitting a 100 ms window once every other time.

        • TeMPOraL an hour ago

          Maybe they have better equipment?

          If you're using a typical PC (or $deity forbid, a phone) with a typical consumer OS, there's several sources of variability between your controller and the visual feedback you receive from the game, each of which could randomly introduce delays on the order of milliseconds or more. That "randomly" here is the key phrase - lag itself is not a problem, the variability is.

          • Muromec an hour ago

            Better equipment or not, frame-perfect input is just hard to do and I'm impressed with people being able to do it.

    • szvsw 3 hours ago

      There are a few very cool examples where someone recently used RL to solve Trackmania, and ended up having to add all sorts of constraints/penalties to prevent extremely strange exploits/glitches that were discovered, IIRC… it's been a while since I watched.

      https://youtu.be/Dw3BZ6O_8LY?si=VUcJa_hfCxjZhhfR

      https://youtu.be/NUl6QikjR04?si=DpZ-iqVdqjzahkwy

      • hnuser123456 3 hours ago

        Well, in the case of the latter, there was a vaguely known glitch for driving on the nose that allowed for better speeds than possible on 4 wheels, but it would be completely uncontrollable to a human. He figured out how to break the problem down into steps that the NN could gradually learn piecewise, until he had cars racing around tracks while balancing on their nose.

        It turned out to have learned to keep the car spinning on its nose for stability, and timing inputs to upset the spinning balance at the right moment to touch the ground with the tire to shoot off in a desired direction.

        I think the overall lesson is that, to make useful machine learning, we must break our problems down into pieces small enough that an algorithm can truly "build up skills" and learn naturally, under the correct guidance.

    • elzbardico 2 hours ago

      For the model, the weird glitches are just another element of the game. As it can't reason, and has no theory of the world or even any real knowledge of what it is doing, the model doesn't have the prior assumptions a human would have about how the game is supposed to be played.

      If you think about it, even using the term "perverse" is a result of us anthropomorphizing any object in the universe that does anything we believe is in the realm of things humans do.

    • rdlw an hour ago

      Not quite what you're describing, but no one has yet linked the classic Tom7 series where he applies deep learning to classic NES games: https://youtu.be/xOCurBYI_gY

    • Muromec 3 hours ago

      > using perverse strategies that no human would do

      Of course we do use perverse strategies and glitches in adversarial multiplayer all the time.

      Case in point: the chainsaw glitch, tumblebuffs, early hits, and perfect blocks in Elden Ring.

    • genewitch 19 hours ago

      On YouTube, codebullet remakes games so that he can try different AI techniques to beat them.

  • mk_stjames 15 hours ago

    I've referenced this paper many times here; it's easily in my top 10 of papers I've ever read. It's one of those ones that, if you go into it blind, you have several "Oh no f'king way" moments.

    The interesting thing to me now is... that research is very much a product of the right time. The specific Xilinx FPGA he was using was incredibly simple by today's standards and this is actually what allowed it to work so well. It was 5v, and from what I remember, the binary bitstream to program it was either completely documented, or he was able to easily generate the bitstreams by studying the output of the Xilinx router- in that era Xilinx had a manual PnR tool where you could physically draw how the blocks connected by hand if you wanted. All the blocks were the same and laid out physically how you'd expect. And the important part is that you couldn't brick the chip with an invalid binary bitstream programming. So if a generation made something wonky, it still configured the chip and ran it, no harm.

    Most, if not all, modern FPGAs just cannot be programmed like this anymore. Just randomly mutating a bitstream would, at best, make an invalid binary that the chip just won't burn. Or, at worst, brick it.

  • breatheoften 20 hours ago

    I remember this paper being discussed in the novel "The Science of Discworld" -- a super interesting book involving a collaboration between a fiction author and some real-world scientists -- where the fictional characters in the novel discover our universe and its rules ... I always thought there was some deep insight to be had about the universe within this paper. Now I think the unexpectedness instead says something about the nature of engineering and control and the human mechanisms for understanding these sorts of systems -- sort of by definition, human engineering relies on linearized approximations to characterize the effects being manipulated, so something which operates in modes far outside those models is basically inscrutable. I think that's kind of expected, but the results still provoke fascination in pondering the solutions superhuman engineering methods might yet find with modern technical substrates.

    • thirdtruck 2 hours ago

      Xe highly recommend the series! Xe keep going back to them for bedtime audio book listening. Chapters alternate between fact and fiction and the mix of intriguing narrative and drier but compelling academic talk help put xir otherwise overly busy mind to rest. In fact, xe bought softcover copies of two of them just last week.

      The science is no longer cutting edge (some of the books are over twenty years old) but the deeper principles hold, and Discworld makes for an excellent foil to our own Roundworld, just as Sir Terry Pratchett intended.

      Indeed, the series says more about us as humans and our relationship to the universe than the universe itself and xe love that.

  • Terr_ 21 hours ago

    IIRC the flip-side was that it was hideously specific to a particular model and batch of hardware, because it relied on something that would otherwise be considered a manufacturing flaw.

    • phire 17 hours ago

      Not even one batch. It was specific to that exact one chip it was evolved on. Trying to move it to another chip of the same model would produce unreliable results.

      There is actually a whole lot of variance between individual silicon chips; even two chips right next to each other on the wafer will perform slightly differently. They will all meet the spec on the datasheet, but datasheets always specify ranges, not exact values.

      • BoiledCabbage 11 hours ago

        If I recall the original article, I believe it even went a step further. While running on the same chip it evolved on, if you unplugged the lamp that was plugged into the outlet closest to the chip, the chip stopped working. It was really fascinating how environmentally specific it evolved to be.

        That said, it seems like it would be very doable to first evolve a chip with the functionality you need in a single environment, then slowly vary parameters to evolve it to be more robust.

        Or vice versa begin evolving the algorithm using a fitness function that is the average performance across 5 very different chips to ensure some robustness is built in from the beginning.
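
        A rough sketch of what that averaged fitness could look like - the chip objects and their methods here are hypothetical, just to show the shape of the idea:

            def fitness(bitstream, chips):
                # Program the same candidate onto several physically different chips
                # and average the measured scores, so only portable tricks get rewarded.
                scores = []
                for chip in chips:
                    chip.program(bitstream)                        # hypothetical API
                    scores.append(chip.measure_tone_separation())  # hypothetical API
                return sum(scores) / len(scores)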

        • sitkack 2 hours ago

          > slowly vary parameters to evolve it to be more robust

          Injecting noise and other constraints (like forcing it to place circuits in different parts of the device) is totally valid when it needs to evolve in place.

          For the most part, I think it would be better to run in a simulator where it can evolve against an abstract model, then it couldn't overfit to the specific device and environment. This doesn't work if the best simulator of the system is the system itself.

          https://en.wikipedia.org/wiki/Robust_optimization

          https://www2.isye.gatech.edu/~nemirovs/FullBookDec11.pdf

          Robust Optimization https://www.youtube.com/watch?v=-tagu4Zy9Nk

        • flir 2 hours ago

          Yeah, if you took it outside the temperature envelope of the lab it failed. I guess thermal expansion?

          There were also a bunch of cells that had inputs, but no outputs. When you disconnected them... the circuit stopped working. Shades of "magic" and "more magic".

          I've never worked with it, but I've had a fascination with GA/GP ever since this paper/the Tierra paper. I do wonder why it's such an attractive technique - simulated annealing or hill climbing just don't have the same appeal. It's the biological metaphor, I think.

    • svilen_dobrev 19 hours ago

      A long time ago, maybe in the Russian journal "Radio" ~198x, someone described how, if you got a certain transistor from a particular batch from a particular factory/date and connected it in some weird way, it would make a full FM radio (or some similarly complex thing).. because they had gotten the yields wrong. No idea how they figured that out.

      But mistakes aside, what would it be like if chips from the factory could learn / fine-tune how they work (better), on the fly?

      • genewitch 19 hours ago

        AM radio can be "detected" with a semiconductor, so this kinda makes sense if you squint. If you can find it, someday, update this!

      • yetihehe 10 hours ago

        At my high school, we had an FM radio transmitter on the other side of the street. Pretty often you could hear one of the stations in the computer speakers in the library, so FM radio can be detected by simple analog circuits.

      • fooker 2 hours ago

        Interestingly, radios used to be called transistors colloquially.

  • hiAndrewQuinn 21 hours ago

    I remember talking about this with my friend and fellow EE grad Connor a few years ago. The chip's design really feels like a biological approach to electrical engineering, in the way that all of the layers we humans like to neatly organize our concepts into just get totally upended and messed with.

    • pharrington 21 hours ago

      Biology also uses tons of redundancy and error correction that the generative algorithm approach lacks.

      • remexre 19 hours ago

        Though, the algorithm might plausibly evolve it if it were trained in a more hostile environment.

  • bhouston 3 hours ago

    Yup, was coming here to basically say the same thing. Amazing innovations happen when you let a computer just do arbitrary optimization/hill climbing.

    Now, you can impose additional constraints to the problem if you want to keep it using transistors properly or to not use EM side effects, etc.

    This headline is mostly engagement bait as it is first nothing new and second, it is actually fully controllable.

  • alexpotato 21 hours ago

    I read the Damn Interesting post back when it came out, and seeing the title of this post immediately made me think of Thompson's work as well.

  • markisus 3 hours ago

    The interesting thing about this project is that it shouldn't even be possible if the chip behaved as an abstract logical circuit, since then it would simply implement a finite automaton. You must abuse the underlying physics to make the logic gates behave like something else.

  • userbinator 21 hours ago

    That's exactly what I thought of too when I saw the title.

    Basically brute force + gradient descent.

  • huxley 19 hours ago

    “More compact than anything a human engineer would ever come up with” … sounds more like they built an artificial Steve Wozniak

  • ivanmontillam 3 hours ago

    Reminds me of disassembled executables, unintelligible to the untrained eye.

    It's even more convoluted when it's also re-interpreted into C.

    Designs nobody would ever come up with, but equivalent - and with compiler tricks we'd never have known about.

  • pjs_ 2 hours ago

    A classic. What's old is new again

  • codr7 20 hours ago

    And this is the kind of technology we use to decide if someone should get a loan, or if something is a human about to be run over by a car.

    I think I'm going to simply climb up a tree and wait this one out.

    What if it invented a new kind of human, or a different kind of running over?

  • cyanydeez 7 hours ago

    So, the future is reliance on undefined but reproducible behavior

    Not sure that's working out well for democracy

  • cgcrob 19 hours ago

    Relying on nuances of the abstraction and undefined or variable characteristics sounds like a very very bad idea to me.

    The one thing you generally want for circuits is reproducibility.

valine a day ago

I strongly dislike when people say AI when they actually mean optimizer. Calling the product of an optimizer "AI" is more defensible: you optimized an MLP and now it writes poetry. Fine. Is the chip itself the AI here? That's the product of the optimizer. Or is it the 200 lines of code that define a reward and iterate the traces?

  • catlifeonmars 21 hours ago

    Yesterday I used a novel AI technology known as “llvm” to remove dead code paths from my compiled programs.

    • sph 3 hours ago

      Say no more. Here's $100 million to take this product to market.

    • selcuka 18 hours ago

      > known as “llvm” to remove dead code paths

      Large Language Vulture Model?

  • LPisGood 20 hours ago

    Optimization is near and dear to my heart (see username), but I think it's fine to call optimization processes AI, because they are AI in the classical sense.

    • ragebol 2 hours ago

      Once a computer can do something, it's no longer called AI but just an algorithm.

      At least, that used to be the case before the current AI summer and hype.

      • whatshisface an hour ago

        Once a computer can do something, it's just an algorithm. LLMs can't really do anything right, so they're AI. ;)

  • selcuka 18 hours ago

    > I strongly dislike when people say AI when they actually mean optimizer.

    It probably uses a relatively simple hill climbing algorithm, but I would agree that it could still be classified as machine learning. AI is just the new, hip term for ML.

    • edanm an hour ago

      What? Quite the opposite. AI is the original and broader term, ML is a subset of AI. Deep Learning was the "hot" terminology around 2015-2018, and since 2022/Chatgpt, LLM has become the trendy word. Yes, people now talk about "AI" as well, but that term has always been there, and anytime some AI technique becomes talked about, the term AI gets thrown around a lot too.

      (Note - I may have misunderstood your meaning btw, if so apologies!)

  • parsimo2010 an hour ago

    The "OG" AI research, like the the era of Minsky's AI Lab at MIT in the 1970s, broke AI into a few sub-domains, of which optimization was one. So long before we used the term AI to describe an LLM-based chat bot, we used it to describe optimization algorithms like genetic algorithms, random forests, support vector machines, etc.

  • ssivark 2 hours ago

    AI is such an ill-defined word that it's very hard to say what it's definitely not.

    Marvin Minsky -- father of classical AI -- pointed out that intelligence is a "suitcase word" [1] which can be stuffed with many different meanings.

    [1] https://www.nature.com/articles/530282a

    • zelphirkalt 2 hours ago

      I think it is as follows: we call it AI nowadays as long as we cannot clearly and easily show how to get to the result, which means the computer did something that seems intelligent to us for the moment. Once we can explain things and write down a concise algorithm, we hesitate to call it AI.

      Basically, we call things AI, that we are too stupid to understand.

      • azinman2 2 hours ago

        I think what's really happened is we get output that's close enough to normal communication to "feel" human. I could say it's all a giant trick, which it kind of is, but we've also gotten to the point where the trick is also useful for many things that previously didn't have a good solution.

  • scotty79 a day ago

    The chip is not called AI-chip but rather AI-designed chip. At least in the title.

    • valine 21 hours ago

      My point is that it’s equally ridiculous to call either AI. If our chip here is not the AI then the AI has to be the optimizer. By extension that means AdamW is more of an AI than ChatGPT.

      • ulonglongman 20 hours ago

        I don't understand. I learnt about optimizers, and genetic algorithms in my AI courses. There are lots of different things we call AI, from classical AI (algorithms for discrete and continuous search, planning, sat, Bayesian stuff, decision trees, etc.) to more contemporary deep learning, transformers, genAI etc. AI is a very very broad category of topics.

        • valine 19 hours ago

          Optimization can be a tool used in the creation of AI. I'm taking issue with people who say their optimizer is an AI. We don't need to personify every technology that can be used to automate complex tasks. All that does is further dilute an already overloaded term.

          • layer8 19 hours ago

            I agree that the article is wrong in using the wording “the AI”. However, firstly the original publication [0] doesn’t mention AI at all, only deep-learning models, and neither do any of the quotes in the article. Secondly, it is customary to categorize the technology resulting from AI research as AI — just not as “an AI”. The former does not imply any personification. You can have algorithms that exhibit intelligence without them constituting any kind of personal identity.

            [0] https://www.nature.com/articles/s41467-024-54178-1

          • kadoban 19 hours ago

            It's not "an AI", it's AI as in artificial intelligence, the study of making machines do things that humans do.

            A fairly simple set of if statements is AI (an "expert system" specifically).

            AI is _not_ just talking movie robots.

          • soulofmischief 18 hours ago

            Who said it was 'an AI'? Do you understand what intelligence means? And what artificial means?

        • Nition 18 hours ago

          In game dev we've called a bunch of weighted If statements AI since the 80s. Sometimes they're not even weighted.

          • fc417fc802 16 hours ago

            I think that's a bit different. The term is overloaded. There's "the machine is thinking" AI and then there's "this fairly primitive code controls an agent" AI. The former describes the technique while the latter describes the use case.

            Clippy was an AI but he wasn't an AI.

      • saltcured 18 hours ago

        Artificial intelligence, as others are using it here to cover a broad field of study or set of techniques. You seem to be objecting because the described product is not "an artificial intelligence", i.e. an artificial mind.

        For some of us, your objection sounds as silly as if we were to tell some student they didn't use algebra, because what they wrote down isn't "an algebra".

        • coderenegade 17 hours ago

          You use optimization to train AI, but we usually refer to AI as being the parametrized function approximator that is optimized to fit the data, not the optimizer or loss function themselves.

          This is "just" an optimizer being used in conjunction with a simulation, which we've been doing for a long, long time. It's cool, but it's not AI.

      • TeMPOraL an hour ago

        Yes. It's the optimizer here that's called "AI" because AIs are optimizers - and so are humans. It's a matter of sophistication.

      • rowanG077 19 hours ago

        I don't understand what your gripe is. Both are AI. Even rudimentary decision trees are AI.

        • coderenegade 17 hours ago

          There's no function here that is analogous to a decision tree, or a parametrized model, just an optimizer and a loss function with a simulator. This isn't AI in the way it's commonly understood, which is the function that takes an input and produces a learned output.

          • kadoban 16 hours ago

            The entire point of the thing is that it takes an input and produces an output. The output is the designed chip.

            • coderenegade 15 hours ago

              An optimizer produces a single optimized set of parameters. AI is a (usually parametrized) function mapping a collection of input states to a collection of output states. The function is the AI, not the optimizer. I'd suggest anyone who thinks otherwise go and do some basic reading.

        • valine 19 hours ago

          Let's just call every Turing-complete system AI and be done with it.

  • AlienRobot 3 hours ago

    Is there anything we can even call AI that would be correct?

  • satvikpendem 20 hours ago

    Sigh, another day, another post I must copy paste my bookmarked Wikipedia entry for:

    > "The AI effect" refers to a phenomenon where either the definition of AI or the concept of intelligence is adjusted to exclude capabilities that AI systems have mastered. This often manifests as tasks that AI can now perform successfully no longer being considered part of AI, or as the notion of intelligence itself being redefined to exclude AI achievements.[4][2][1] Edward Geist credits John McCarthy for coining the term "AI effect" to describe this phenomenon.[4]

    > McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked."[5] It is an example of moving the goalposts.[6]

    > Tesler's Theorem is:

    > AI is whatever hasn't been done yet.

    > — Larry Tesler

    https://en.wikipedia.org/wiki/AI_effect

    • dijksterhuis 2 hours ago

      Prior to 2021/202-whenever, most sensible people called this stuff deep learning / machine learning etc. For over 15+ years it’s been called machine learning — “getting machines to complete tasks without being explicitly programmed to do so”.

      since 2021/whenever LLM applications got popular everyone has been mentioning AI. this happened before during the previous mini-hype cycle around 2016-ish where everyone was claiming neural networks were “AI”. even though, historically, they were still referred to by academics as machine learning.

      no-one serious, who actually works on these things and isn't interested in making hordes of $$$ or getting popular on social media, calls this stuff AI. so if there were a wikipedia link one might want to include on this thread, I'd say it would be this one — https://en.m.wikipedia.org/wiki/Advertising

      because, let’s face it, advertising/marketing teams selling products using linear regression as “AI” are the ones shifting the definition into utter meaninglessness.

      so it’s no surprise people on HN, some of whom actually know stuff about things, would be frustrated and annoyed and get tetchy about calling things “AI” (when it isn’t) after 3 sodding years of this hype cycle. i was sick of it after a month. imagine how i feel!

      - edit, removed line breaks.

      • aidenn0 2 hours ago

        Machine learning is a subfield of AI. Complaining about calling ML AI is like complaining about calling Serena Williams an "athlete" because she's actually a "tennis player"

        • dijksterhuis an hour ago

          You've missed the point I was making it seems, so I'll condense and focus down on it.

          The reason why the "AI" goalposts always seem to shift is not that people suddenly decide to change the definition, but that the definition gets watered down by advertising people etc. Most people who know anything call this stuff deep learning/machine learning to avoid that specific problem.

          Personally, I can't wait for people who work in advertising to get put on the same spaceship as the marketers and telephone sanitizers. (It's not just people in advertising. i just don't like advertising people in particular).

          --

          I'd argue machine learning is actually a sub-field within statistics. but then we're gonna get into splitting hairs about whether Serena Williams is an athlete, or a professional sports player. which wasn't really the point I was making and isn't actually that important. (also, it can be a sub-field of both, so then neither of us is wrong, or right. isn't language fun!).

    • taberiand 19 hours ago

      We'll never build true AI, just reach some point where we prove humans aren't really all that intelligent either

      • AlienRobot 3 hours ago

        AI is when Einstein is your butler.

    • fc417fc802 17 hours ago

      > It is an example of moving the goalposts.

      On the contrary. The "AI effect" is an example of attempting to hold others to goalposts that they never agreed to in the first place.

      Instead of saying "this is AI and if you don't agree then you're shifting the goalposts" instead try asking others "what future developments would you consider to be AI" and see what sort of answers you get.

janice1999 a day ago

Is this really so novel? Engineers have been using evolutionary algorithms to create antennas and other components since the early 2000s at least. I remember watching a FOSDEM presentation on an 'evolved' DSP for radios in the 2010s.

https://en.wikipedia.org/wiki/Evolved_antenna

  • happytoexplain a day ago

    I don't believe it's comparable. Yes, we've used algorithms to find "weird shapes that work" for a long time, but they've always been very testable. AI is being used for more complex constructs that have exponentially greater testable surface area (like programs and microarchitectures).

  • xanderlewis 21 hours ago

    This is really interesting and I’m surprised I’ve never even heard of it before.

    Now I’m imagining antennas breeding and producing cute little baby antennas that (provided they’re healthy enough) survive to go on to produce more baby antennas with similar characteristics, and so on…

    It’s a weird feeling to look at that NASA spacecraft antenna, knowing that it’s the product of an evolutionary process in the genuine, usual sense. It’s the closest we can get to looking at an alien. For now.

    • jhot 21 hours ago

      Two antennas get married. The wedding was ok but the reception was great!

  • nyeah 3 hours ago

    Yes, for low-frequency analog circuits these experiments go back to the 1990s at least.

    J. R. Koza, F. H Bennett, D. Andre, M. A. Keane, and F. Dunlap, “Automated synthesis of analog electrical circuits by means of genetic programming,” IEEE Trans. Evol. Comput., vol. 1, pp. 109–128, July 1997. https://dl.acm.org/doi/10.1109/4235.687879

  • 1970-01-01 an hour ago

    Yes, it's nothing novel. But it is AI adjacent news, so it automagically becomes a headline.

pmlnr 20 hours ago

    These are highly complicated pieces of equipment almost as complicated as living organisms.
    In some cases, they've been designed by other computers.
    We don't know exactly how they work.
Westworld, 1973
  • lionkor 2 hours ago

    Except outside of science fiction, it'll just be horribly broken once you put it to use in the real world

    • mprev 2 hours ago

      To be fair, most of the science fiction is about it being horribly broken or, at least, functioning in ways its human stewards did not intend.

    • bredren 2 hours ago

      This may have been explored before, but how different is this from natural phenomena whose behavior we only have theories for?

      That is, people put aspects of physics and horticulture to use long before understanding the science. Also, with varying success.

      Could LLM-generated AI artifacts be thought of along similar lines?

    • pmlnr an hour ago

      that's basically what the movie is about.

      • hinkley 21 minutes ago

        Yul Brynner running around murdering humans.

arnaudsm 2 minutes ago

I've seen junior code "so weird that humans cannot understand them".

exabrial 2 hours ago

This comment (not mine) from the article is absolute Gold:

> "Not only did the chip designs prove more efficient, the AI took a radically different approach — one that a human circuit designer would have been highly unlikely to devise."

> That is simply not true... more likely, a human circuit designer would not be allowed to present a radical new design paradigm to his/her superiors and other lead engineers. (a la Edison, Westinghouse, Tesla, Da Vinci, et-al.)

NitpickLawyer a day ago

> AI models have, within hours, created more efficient wireless chips through deep learning, but it is unclear how their 'randomly shaped' designs were produced.

IIRC this was also tried at NASA, they used some "classic" genetic algorithm to create the "perfect" antenna for some applications, and it looked unlike anything previously designed by engineers, but it outperformed the "normal" shapes. Cool to see deep learning applied to chip design as well.

  • Frenchgeek a day ago

    Wasn't there a GA FPGA design to distinguish two tones that was so weird and specific that not only did it use capacitance for part of its work, it literally couldn't work on another chip of the same model?

    • hinkley 18 minutes ago

      As I recall it didn’t even work from day to day due to variance in the power supply triggered by variance in the power grid.

      They had to redo the experiment on simulated chips.

    • isoprophlex a day ago

      Yes, indeed, although the exact reference escapes me for the moment.

      What I found absolutely amazing when reading about this, is that this is exactly how I always imagined things in nature evolving.

      Biology is mostly just messy physics where everything happens at the same time across many levels of time and space, and a complex system that has evolved naturally appears to always contain these super weird specific cross-functional hacks that somehow end up working super well towards some goal

    • actionfromafar a day ago

      I think it was that or a similar test where it would not even run on another part, just the single part it was evolved on.

mikewarot 19 hours ago

I've only started to look into the complexities involved in chip design (for my BitGrid hobby horse project) but I've noticed that in the Nature article, all of the discussion is based on simulation, not an actual chip.

Let's see how well that chip does if made by the fab. (I doubt they'd actually make it, likely there are a thousand design rule checks it would fail)

If you paid them to override the rules and make it anyway, I'd like to see if it turned out to be anything other than a short circuit from power to ground.

  • mng2 17 hours ago

    They do have some measurement results in figures 6 and 7. Looks like they didn't nail the center frequencies but at mmWave it's reasonable for a first attempt -- they're still missing something in their model though, same as if you did it by hand.

    I'm skeptical that these pixelated structures are going to turn out anything better than the canonical shapes. They look cool but may just be "weird EM tricks", deconstructing what doesn't really need to be. Anyone remember the craze for fractal antennas?

zahlman 21 hours ago

If we can't understand the designs, how rigorously can we really test them for correctness?

  • molticrystal 21 hours ago

    Our human designs strive to work in many environmental conditions. Many early AI designs, if iterated in the real world, would incorporate local physical conditions into their circuits. For example, that fluorescent lamp or fan I'm picking up (from the AI/evolutionary design algorithm's perspective) has great EM waves that could serve as a reliable clock source, eliminating the need for my own. Thus, if you move things, it would break.

    I am sure there are analogous problems in the digital simulation domain. Without thorough oversight and testing across multiple power cycles, it's difficult to predict how well the circuit will function, and how incorporating feedback into the program will affect its direction; if you're not careful, it causes the aforementioned strange problems.

    Although the article mentions corrections to the designs, what may be truly needed is more constraints. The better we define these constraints, the more likely correctness will emerge on its own.

    • skissane 21 hours ago

      > Our human designs strive to work in many environmental conditions. Many early AI designs, if iterated in the real world, would incorporate local physical conditions into their circuits. For example, that fluorescent lamp or fan I'm picking up(from the AI/evolutionary design algorithm's perspective) has great EM waves that could serve as a reliable clock source, eliminating the need for my own. Thus if you move things it would break.

      This problem may have a relatively simple fix: have two FPGAs – from different manufacturing lots, maybe even different models or brands – each in a different physical location, maybe even on different continents. If the AI or evolutionary algorithm has to evolve something that works on both FPGAs, it will naturally avoid purely local stuff which works on one and not the other, and produce a much more general solution.

      • IsTom 3 hours ago

        And then you change temperature/elevation/move it next to a router and it falls apart, because after all there is going to be something correlated.

        • FeepingCreature 2 hours ago

          Great, so use ten. Use a hundred. Spread them around. Put one on the ISS.

          The problems just have to be uncorrelated.

      • reissbaker 3 hours ago

        This is similar to why increasing the batch size during LLM training results in better performance: you force the optimizer to generalize to a larger set.

  • PhilipRoman 20 hours ago

    Ask the same "AI" to create a machine readable proof of correctness. Or even better - start from an inefficient but known to be working system, and only let the "AI" apply correctness-preserving transformations.

    • AndroTux 17 hours ago

      I don’t think it’s that easy. I’m sure Intel, AMD and Apple have a very sophisticated suite of “known working systems” that they use to test their new chips, and they still build in bugs that security researchers find 5 years later. It’s impossible to test and verify such complex designs fully.

  • djmips 21 hours ago

    Especially true if the computer design creates a highly coupled device that could be process sensitive.

  • 42lux 21 hours ago

    Results?

    • evrimoztamur 21 hours ago

      Can you always test the entire input space? Only for a few applications.

      • 42lux 21 hours ago

        I am really curious about how you test software...

        • hansvm 3 hours ago

          It's a little different in software. If I'm writing a varint decoder and find that it works for the smallest and largest 65k inputs, it's exceedingly unlikely that I'll have written a bug that somehow affects only some middling number of loop iterations yet somehow handles those already tested transitions between loop iteration counts just fine.

          For a system you completely don't understand, especially when the prior work on such systems suggests a propensity for extremely hairy bugs, spot-checking the edge cases doesn't suffice.

          And, IMO, bugs are usually much worse the lower down in the stack they appear. A bug in the UI layer of some webapp has an impact and time to fix in proportion to that bug and only that bug. Issues in your database driver are insidious, resulting in an unstable system that's hard to understand and potentially resulting in countless hours fixing or working around that bug (if you ever find it). Bugs in the raw silicon that, e.g., only affect 1 pair of 32-bit inputs (in, say, addition) are even worse. They'll be hit in the real world eventually, and they're not going to be easy to handle, but it's simultaneously not usually practical to sweep a 64-bit input space (certainly not for every chip, if the bug is from analog mistakes in the chip's EM properties).

        • AndroTux 17 hours ago

          Literally no piece of software is bug-free. Not one. What are you talking about? Of course it's impossible to test all inputs, because there are going to be inputs that you can't even conceive of at the time of designing. What if your application suddenly runs at 1000000x the intended speed because hardware improves so much? How do you test for that?

          • 42lux 9 hours ago

            Hardware doesn’t change over time…

            • AndroTux 9 hours ago

              Yes it does. It ages. But even if it doesn't, my point still stands. Or are you insinuating that the engineers over at Intel, AMD and Apple don't know what they're doing? Because clearly their CPUs aren't flawless and still have bugs, like Spectre/Meltdown.

              • 42lux 8 hours ago

                It deteriorates, it doesn't change. The functionality is still there, and no modern hardware deteriorates to a failing state before it becomes obsolete. Yes, I am insinuating that the engineers at Intel, AMD, Apple and Nvidia are incentivized to prioritize expedient solutions over developing more robust architectures, as evidenced by vulnerabilities like Spectre and Meltdown.

  • timdiggerm 3 hours ago

    Were we ever doing that though?

  • xarope 15 hours ago

    following classic TDD, use novel "AI" to write millions of test cases.

    (forgive me, my fellow HNers...)

  • pfdietz 3 hours ago

    Evolution seems to work at producing "designs" and there's no understanding there at all.

diabllicseagull 3 hours ago

Pieces like this remind me that even professors need to sell what they do, like saying "Humans cannot really understand them" in this case. We have never had more simulation tools and compute power than we have today, and yet we can't understand how these chips really work?

I think this is an example of mystifying-for-marketing as used in academia, like portraying this research as some breakthrough at a level that exceeds human understanding. IMHO practitioners of science should be expected to do better than this.

  • jampekka 2 hours ago

    It's not necessarily the professor really saying that. Journalists (and university press offices) like to have such lines in pop science articles, and how it goes is that there's an interview from which the writer "interprets" some quotes. These are typically sent to the interviewee to check, but many don't bother to push back as long as it's not egregiously bad.

adpirz 3 hours ago

I’ve never been able to put it into words, but when we think about engineering in almost any discipline, a significant amount of effort goes into making things buildable by different groups of people. We modularize components or code so that different groups can specialize in isolated segments.

I always imagined if you could have some super mind build an entire complex system, it would find better solutions that got around limitations introduced by the need to make engineering accessible to humans.

  • heisenbit 2 hours ago

    An "optimal" solution may do away with "wasteful" abstraction of interfaces and come up with something more efficient. But there is wisdom in narrow interfaces and abstractions. Structure helps to evolve over time which at least for now most computer optimization focuses on getting the best solution now.

  • zemvpferreira 3 hours ago

    I think it’s half guess and half hope, but I imagine we’ll spend centuries building really dumb mechanisms, then suddenly be left completely in the dust intellectually. I guess that’s what you’d call the singularity. I don’t know if that hypermind will bother designing circuits for us.

  • jayd16 3 hours ago

    Doesn't need a supermind to prove this is possible. Mere mortals and simple compilers can inline functions and trade abstraction for performance.

z3t4 2 hours ago

As a kid I played a competitive text-based strategy game, and I made my own crude simulation that randomly tried different strategies. I let the simulation run for a few days with billions of iterations, and it came up with a very good gameplay strategy. I went from being ranked below 1000 to top 10 using that strategy.

I also wrote programs that simulated classic game shows like the three-doors problem, where you either stay with your door or switch doors. After running the simulation one million times it ended up with a 66% chance of winning if you switched doors. The teacher of course didn't believe me, as it was too hard a problem for a high schooler to solve, but many years later I got it confirmed by a math professor who proved it.
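
A minimal Python sketch of that Monty Hall simulation, for anyone who wants to reproduce the ~66% result:

  import random

  def play(switch: bool) -> bool:
      doors = [0, 1, 2]
      prize = random.choice(doors)
      pick = random.choice(doors)
      # The host opens a door that holds no prize and wasn't picked.
      opened = random.choice([d for d in doors if d != pick and d != prize])
      if switch:
          pick = next(d for d in doors if d != pick and d != opened)
      return pick == prize

  trials = 1_000_000
  wins_switch = sum(play(True) for _ in range(trials))
  wins_stay = sum(play(False) for _ in range(trials))
  print(f"switch: {wins_switch / trials:.3f}  stay: {wins_stay / trials:.3f}")
  # Typically prints something close to switch: 0.667, stay: 0.333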

Computers are so fast that you don't really need AI learning to iterate; just run a random simulation and you will eventually end up with something very good.

I think this might be a use case for quantum computers, so if you have a quantum computer I'm interested in working with you.

pradn 2 hours ago

There's a great paper that collects a long list of anecdotes about computational evolution.

"The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities"

[1] https://direct.mit.edu/artl/article/26/2/274/93255

calibas 2 hours ago

I think it's pure AI hype to claim these are beyond human understanding, and I doubt that's what the professor really meant. There's real physical processes going on, and we can study them carefully to eventually learn how they work. We just don't understand them yet.

It's religion that claims reality is beyond human understanding, it's not something scientists should be doing.

whywhywhywhy 2 hours ago

I thought tiny wireless antennas were already dark magic that people barely understood; design there was more trial and error anyway. Feels like yet another so-called science publication running a clickbait headline.

faramarz 3 hours ago

That's an approach.

Just last night I took a similar approach to arriving at a number of paths to take: I shared my desired output with a knowledge graph I had populated and asked the AI to fill in the blanks about the activities that would lead a user to that output. It worked! A few non-correlative gaps came up as well, and after some fine-tuning they got included in the graph to enrich the contentious output.

I feel this is a similar approach, and it's our job to populate and understand the gaps in between if we are trying to understand how these relationships came into existence. A visual mind map of the nodes and the entire network is a big help for a visual learner like myself to see the context of LLMs better.

Anyway, the tool I used is InfraNodus, and I'm curious if this community is aware of it; I may have even discovered it on HN, actually.

rwj an hour ago

Also see work done on topology optimization. Mechanical designs no human would come up with, but AI not required either, just numerical optimization.

DrNosferatu 20 hours ago

It's inevitable: software (and other systems) will also become like this.

  • invalidusernam3 2 hours ago

    One of the junior developers I worked with years ago wrote code that humans couldn't understand; maybe he was just ahead of his time.

  • satvikpendem 19 hours ago

    I've been using Cursor, it already is. I've found myself becoming merely a tester of the software rather than a writer of it, the more I use this IDE.

    • DrNosferatu 19 hours ago

      It’s a bit clunky still, IMHO. Or did you find a good tutorial to leverage it fully?

      • virgildotcodes 2 hours ago

        It’s really been advertised heavily lately but I just discovered it a couple weeks ago, and in case you’re unaware the real aha moment with Cursor for me was Composer in Agent mode with Sonnet 3.5.

        If you want the highest chance of success, use a reasoning model (o3-mini high, o1 pro, r1, grok 3 thinking mode) to create a detailed outline of how to implement the feature you want, then copy paste that into composer.

        It one shots a lot of greenfield stuff.

        If you get stuck in a loop on an issue, this prompt I got from twitter tends to work quite well to get you unstuck: "Reflect on 5-7 different possible sources of the problem, distill those down to 1-2 most likely sources, and then add logs to validate your assumptions before we move onto implementing the actual code fix."

        Just doing the above gets me through 95% of stuff I try, and then occasionally hopping back out to a reasoning model with the current state of the code, errors, and logs gets me through the last 5%.

  • codr7 20 hours ago

    And then it's pretty much game over.

    • DrNosferatu 20 hours ago

      It’s better we [democracies] ride and control the AI change of paradigm than just let someone else do it for us.

      • pessimizer 19 hours ago

        "Democracy" is just a chant now. It's supposed to somehow happen without votes, privacy, freedom of expression, or freedom of association.

        • DrNosferatu 19 hours ago

          Well, Democracy is still the least worst of all political systems!

          But please: would you prefer something else?

          • codr7 18 hours ago

            The point is there is no difference except spelling.

    • DrNosferatu 10 hours ago

      Not game over: it’s just that Engineering will turn into Biology :D

      • int_19h 8 hours ago

        Psychotherapy, rather, as the natural evolution of prompting.

bli940505 32 minutes ago

When are we going to see these in production and actually in use?

phendrenad2 2 hours ago

And the fact that humans "cannot understand it" means that it's likely overfitted to the job. If you want to make slight modifications to the design, you'll likely have to run the AI tool over again and get a completely new design, because there's zero modularity.

choxi 20 hours ago

Maybe we’re all just in someone’s evolutionary chip designer

awinter-py 21 hours ago

> The AI also considers each chip as a single artifact, rather than a collection of existing elements that need to be combined. This means that established chip design templates, the ones that no one understands but probably hide inefficiencies, are cast aside.

there should be a word for this process of making components efficiently work together, like 'optimization' for example

  • fc417fc802 16 hours ago

    This is a strange distinction for the article to point out. If you want to take a more modular approach all you have to do is modify the loss function to account for that. It's entirely arbitrary.
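
    A toy sketch of that idea in Python, with made-up placeholder metrics (not the paper's actual objective):

      def performance_loss(design):
          # Placeholder: pretend fewer gates means a better-performing chip.
          return design["gate_count"]

      def modularity_penalty(design):
          # Placeholder: penalize wires that cross module boundaries.
          return design["cross_module_wires"]

      def objective(design, lam=0.5):
          # Whatever the optimizer minimizes; lam trades performance for modularity.
          return performance_loss(design) + lam * modularity_penalty(design)

      flat = {"gate_count": 37, "cross_module_wires": 20}
      modular = {"gate_count": 45, "cross_module_wires": 2}
      print(objective(flat), objective(modular))  # the weight decides which design "wins"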

p0w3n3d 2 hours ago

I wonder about the security of chips designed this way. It's been demonstrated that an apparently optimal architecture can lead to huge errors that create security flaws (Spectre, PACMAN for M1, etc.).

elzbardico 2 hours ago

They make no mention of the kind of algorithm/model they used. I believe it was not an LLM, was it?

lwhi 2 hours ago

I'm sure AI produced code will be unintelligible to humans soon too.

myrandomcomment 3 hours ago

All the way at the bottom, after all the amazing claims: "many of the designs produced by the algorithm did not work."

sonorous_sub an hour ago

tool assisted speedrun produces unreadable spaghetti code

ship it

karaterobot 2 hours ago

Hey, some of us didn't understand regular chips anyway.

lasermike026 3 hours ago

When I see something I don't understand I use AI to help me understand it.

mwkaufma 20 hours ago

"In particular, many of the designs produced by the algorithm did not work"

aiono 3 hours ago

That's the kind of stuff that really makes me excited about AI.

anshumankmr 14 hours ago

No wonder YC was looking for startups working in this field.

6d6b73 3 hours ago

AI-designed electronics and software will be a security nightmare, at least in the beginning.

whatever1 3 hours ago

I mean, most optimal solutions from complex operations research aren't graspable by the human brain. Look at a complex travelling salesman solution with delivery time windows and your head will spin; you'll wonder how that solution can possibly be optimal. But then you try your rational heuristic and it sucks compared to the real optimum.
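
A tiny illustration of that gap in Python, using plain TSP on random points (no time windows, brute-force optimum vs. the "obvious" nearest-neighbour heuristic):

  import itertools, math, random

  random.seed(0)
  cities = [(random.random(), random.random()) for _ in range(9)]

  def dist(a, b):
      return math.hypot(a[0] - b[0], a[1] - b[1])

  def tour_length(order):
      return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                 for i in range(len(order)))

  # Brute force: fix city 0 as the start, try every ordering of the rest.
  best = min(((0,) + p for p in itertools.permutations(range(1, len(cities)))),
             key=tour_length)

  # Nearest-neighbour heuristic: always go to the closest unvisited city.
  unvisited, tour = set(range(1, len(cities))), [0]
  while unvisited:
      nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
      tour.append(nxt)
      unvisited.remove(nxt)

  print(f"optimal: {tour_length(best):.3f}  nearest-neighbour: {tour_length(tour):.3f}")
  # The heuristic tour is typically noticeably longer, and the optimal one
  # rarely looks like anything a human would sketch by intuition.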

the_real_cher 17 hours ago

Judging by the code they output, we still need to be able to understand it, because I have to constantly fix most of the code LLMs produce.

  • coderenegade 17 hours ago

    Vast chunks of engineering are going to be devalued in the next 10-15 years, across all disciplines. It's already enabling enormous productivity gains in software, and there's zero reason this can't translate to other areas. I don't see any barrier to transformers being able to write code-cad for a crankshaft or a compressor, for example, other than the fact that so far they haven't been trained to do so. Given the extent to which every industry uses software for design, there's nothing to really stop the creation of wrappers and the automation of those tasks. In fact, proprietary kernels aren't even a barrier, because the gains in productivity make building a competitor easier than ever before.

    • bigstrat2003 2 hours ago

      I certainly disagree that it's enabling enormous productivity gains in software. It's a productivity loss to have a tool whose output you have to check yourself every time (because you can't trust it to work reliably).

citizenpaul 15 hours ago

>that pitfalls remain “that still require human designers to correct.” In particular, many of the designs produced by the algorithm did not work

So? Nothing.

mupuff1234 3 hours ago

Didn't realize I have so much in common with AI designed chips.

andrewfromx a day ago

Now we are talking! Next level for sure.