
Seeking PC graphics: Silicon shortages deepen, and old chips reawaken

As many of you are likely already aware, semiconductor demand is significantly outstripping supply at the moment, particularly for ICs fabricated at foundries (versus captive capacity owned by Intel, Samsung, etc.), and doubly so for ICs fabricated on leading-edge lithographies. The problem’s not new, actually, although its effects are increasingly evident (in part because of marketplace realities, in part because the situation’s now regularly covered by various media outlets). TVs and other consumer electronics devices, along with new cars, are just some of the IC-inclusive systems in short supply, and the situation’s expected to persist for some time to come.


Big picture, multiple factors are behind this shortage, including the following:



  • COVID caused folks to retreat to their residences full-time, both for work (for which they snapped up all remaining inventory of computers, networking gear, webcams, headsets and the like) and entertainment (TVs, streaming boxes, tablets, gaming consoles, etc.)

  • That same evacuation pulled people away from semiconductor fabs, assembly and test facilities, warehouses, and other source and intermediary supply chain steps. And even if chips (and the systems containing them) make it out of the manufacturing facility, they may still be stuck in transit thanks to ongoing shipping and logistics bottlenecks.

  • Taiwan, home of TSMC along with UMC and others, is in the midst of a severe drought—a situation that’s particularly critical given how water-intensive the semiconductor fabrication process is. Note that drought also afflicts the western United States, home of numerous fabs owned by Intel and others, but it hasn’t yet led to production cutbacks there.

  • Shortages are causing systems companies to “panic-buy,” multiplying their actual demand in the hope of obtaining a percentage of the resultant inflated order amount (the sketch just after this list illustrates the dynamic). If you’re a smart semiconductor supplier (with analogies to toilet paper manufacturers a year ago, for example), you won’t build sufficient capacity to meet this exaggerated demand, because you’ll end up with costly fallow overcapacity once sanity returns to the marketplace.
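To make that order-inflation effect concrete, here’s a minimal sketch; the company names, unit counts, and 50% expected-allocation rate are purely hypothetical assumptions, not figures from any real buyer or supplier.

```python
# Hypothetical illustration of "panic-buying": each buyer pads its order,
# assuming it will only receive a fraction of what it asks for.
real_demand = {"PC maker A": 100_000, "PC maker B": 60_000, "Car maker C": 40_000}
expected_allocation = 0.5  # assumption: buyers expect ~50% of orders to be filled

padded_orders = {name: int(units / expected_allocation)
                 for name, units in real_demand.items()}

actual = sum(real_demand.values())
apparent = sum(padded_orders.values())
print(f"Real end-market demand: {actual:,} units")
print(f"Order-book 'demand':    {apparent:,} units ({apparent / actual:.1f}x)")
# A supplier that builds capacity to match the order book is left with
# costly idle capacity once buyers stop double-ordering.
```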


The availability situation is even worse with graphics cards and the various components they’re based on. GPUs from AMD and Nvidia—specifically the highest-end and most profitable product families (and members of those families)—tend to leverage the latest-and-greatest fabrication processes, as do the blazing fast and ultra-dense memories that act as the on-board frame buffers mated to those GPUs. Demand for those graphics cards (as well as the PCs that contain them) extends beyond the earlier-mentioned work-from-home consumption spike, because they’re also heavily used for entertainment.


But two additional, newer applications for high-end graphics are putting substantial incremental strain on supply. Ironically, they’re both showcase examples of the general-purpose computing on GPUs (GPGPU) concept that I first wrote about in depth more than 15 years ago. Nvidia in particular had made significant long-term investments in GPGPU, from both silicon and software standpoints, with little return for a long time, making the recent success all the more sweet, no matter how frustrating the inability to meet the entirety of demand might be at the moment.
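For readers who haven’t run into GPGPU before, here’s a minimal sketch of the idea—running ordinary, non-graphics arithmetic on the GPU’s parallel hardware—assuming a CUDA-capable card and the third-party CuPy library; the workload itself is just an illustrative placeholder.

```python
import numpy as np
import cupy as cp  # assumption: CuPy is installed and a CUDA GPU is present

n = 10_000_000
x = np.random.rand(n).astype(np.float32)

# The same element-wise math, once on the CPU...
y_cpu = np.sqrt(x) * np.sin(x)

# ...and once on the GPU, where thousands of threads process chunks in parallel.
x_gpu = cp.asarray(x)                    # copy the data into GPU memory
y_gpu = cp.sqrt(x_gpu) * cp.sin(x_gpu)
cp.cuda.Stream.null.synchronize()        # wait for the GPU to finish

assert np.allclose(y_cpu, cp.asnumpy(y_gpu), atol=1e-5)
print("CPU and GPU results agree")
```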


The first new application is deep learning, which I’ve written about quite a bit of late. It turns out that the same sort of massively parallel arithmetic processing that GPUs use to transform polygons and other high-level primitives into pixels is equally beneficial in accelerating the initial model training phase of deep learning project development. GPUs can also be used for subsequent inference processing, as Nvidia’s burgeoning success in the semi- and fully-autonomous vehicle market exemplifies, although the comparatively more focused function requirements of inference, coupled with comparatively more stringent low-power-consumption requirements—something GPUs aren’t historically known for—open the door to a broader set of competitive acceleration alternatives.
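As a rough illustration of where the GPU fits into training, here’s a minimal sketch using PyTorch (my choice of framework, not one named in this piece); the toy model, random “data,” and hyperparameters are all placeholders.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # fall back to CPU if needed

# A toy classifier; real networks are vastly larger, which is why training
# leans so heavily on the GPU's parallel arithmetic throughput.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(256, 128, device=device)         # random stand-in mini-batch
labels = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)  # forward pass: big, GPU-friendly matrix math
loss.backward()                        # backward pass: more of the same
optimizer.step()
print(f"one training step on {device}, loss = {loss.item():.3f}")
```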


The other, even more notable new application is cryptocurrency “mining.” Here, GPUs’ massively parallel processing is once again advantageous (as is, ironically, the fast, high-capacity memory mated to them). The GPUs’ video outputs find no use here, thereby explaining Nvidia’s development of its display output-free CMP (Cryptocurrency Mining Processor) product line. Nvidia has also striven to discourage its gaming GPUs’ use as cryptocurrency “mining” accelerators by throttling performance when the GPUs detect they’re running a hashing algorithm, although initial attempts were quickly circumvented, in no small part aided by the company’s own inadvertent release of a driver that disabled the limiter. And perhaps obviously, bitcoin- and other cryptocurrency-related demand waxes and wanes with the rise and fall of the cryptocurrency market value.
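To give a flavor of why mining maps so well onto GPUs, here’s a minimal proof-of-work-style sketch. It uses SHA-256 purely for illustration (GPU-mined coins such as Ethereum actually rely on different, memory-hard hash functions), the “difficulty” is a toy value, and real miners run hand-tuned GPU kernels rather than Python.

```python
import hashlib

block_header = b"example block header"   # stand-in for real block data
difficulty_prefix = "0000"               # toy difficulty; real networks need far more zeros

nonce = 0
while True:
    digest = hashlib.sha256(block_header + nonce.to_bytes(8, "little")).hexdigest()
    if digest.startswith(difficulty_prefix):
        break
    nonce += 1  # on a GPU, thousands of threads each test a different nonce range

print(f"found nonce {nonce}: {digest}")
```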


How many graphics cards are “miners” buying right now? Longtime, well-respected market analyst firm Jon Peddie Research estimates that miners snapped up a meaningful percentage of the graphics boards shipped just in the first quarter of this year. And although JPR forecasts that crypto-related demand is beginning to abate, partially because Ethereum plans to move from proof-of-work “mining” to a proof-of-stake scheme instead, we’ve been here before, and the boom-and-bust cycle will likely recur in the future.


How has this all affected me? Well, for the Intel-based system whose build planning I’ve been blogging about of late, I plan to use an AMD Radeon RX 570-based graphics board that I thankfully had bought back in the summer of 2019. I’d originally intended to use it in a “Hackintosh” build—a project that to date I haven’t gotten around to actualizing. At the time, the board cost me far less than what new examples command as I write this, and used cards are fetching similarly inflated prices (here’s a hint: don’t buy a used graphics card right now, because you don’t know whether or not it’s previously been used for crypto “mining,” with its lifetime-shortening consequences). Keep in mind as you read about this price inflation that the Radeon RX 570 was introduced in the spring of 2017; it’s more than four-year-old tech at this point.



For the AMD system, I’m still kicking myself that I’d ended up returning for a refund an Nvidia GeForce GTX 1650-based graphics card. My original plan was to use it in one of my “Hackintosh” systems, until I realized post-purchase that MacOS never supported the GTX 1650, only its predecessors. I’d still been tempted to hold onto it for Windows-only use, until I noticed that although it claimed to require only the PCI Express bus as its power source (versus also needing a dedicated power supply feed, which the proprietary PSUs in the “Hackintosh” systems didn’t natively support), it drew 85W of peak power—well beyond PCI Express’s 75W-max slot spec. I’m kicking myself because that same graphics card, which I could have instead used in one of these built-by-me Windows-only systems, is now selling new on Newegg for $344.60 (plus $20.00 for shipping). The GTX 1650 was introduced in the spring of 2019.
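The power mismatch boils down to simple arithmetic; here’s a tiny sketch of the check, with 75W being the PCI Express x16 slot ceiling and 85W the card’s peak draw mentioned above.

```python
PCIE_SLOT_MAX_W = 75      # maximum power a PCI Express x16 slot is specified to supply
card_peak_draw_w = 85     # the GTX 1650 card's peak draw cited above

deficit = card_peak_draw_w - PCIE_SLOT_MAX_W
if deficit > 0:
    print(f"Exceeds the slot budget by {deficit} W; a dedicated PSU feed is required.")
else:
    print("Fits within the slot's power budget.")
```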



Instead, in January I found and purchased an open-box card (and felt very fortunate to be able to do so even at that inflated price; that same card brand new costs even more as I type these words).



You lose some, but you also sometimes win some. Back in September of last year, I’d bought an Nvidia GeForce GTX 1050 Ti-based card for a modest sum (minus a further $10 for a manufacturer rebate).



My original intent was to hold onto it as a Hackintosh system “spare,” but after watching the price inflation of recent months, I eventually decided to offload my “investment” instead. On May 2 of this year (my birthday, ironically) I successfully sold it on eBay for $210.27. Even after subtracting out eBay’s fees, I still turned a tidy profit. Again, keep in mind that the GeForce GTX 1050 Ti was introduced in the fall of 2016; it was already nearly five-year-old tech when I sold it.


Based on this experience, you might be thinking that I’m also pondering selling the AMD Radeon RX 570-based graphics board I mentioned earlier, and then re-purchasing it (or a successor) once price sanity returns to the market. And you’d be right, although I haven’t (yet) pulled the trigger; the Intel CPU has integrated graphics, after all, so I could just rely on them in the near term. What do you think I should do? Sound off with your thoughts on this or anything else I’ve discussed in the comments!


Brian Dipert is Editor-in-Chief of the Embedded Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


