I'd be interested in other takes on this article, I'm struggling to accept the arguments presented here.
> turning instead to software to handle functions that had historically been done in hardware
That generally means doing things less efficiently. Devices will get slower and battery lifetimes shorter.
> There’s room for new startups and new ways of doing things. More of the car’s features are accessed using a touchscreen as opposed to a knob or button, for example, which turns components that once were hardware into a software program.
This has got to be one of the worst times to do a hardware startup. Doing R&D at an established company is already painful, between skyrocketing costs and 6-month-plus lead times on standard microcontrollers (not to mention more specialized ICs). If you're just starting out, then instead of "only" struggling to keep production going, you have to hope that the chips you're prototyping with will still be available when you go into production. If things keep going the way they have this year, that's a very risky bet no matter what components you select.
> And now, in light of the chip shortage, suppliers to the automotive industry are going to be looking for more ways to turn something that used to be an electronic part into software.
Not in the automotive field but from what I've heard it's the exact opposite issue, the [lack of] processors they'd like to use in those fancy touchscreen infotainment systems are a big part of their problem.
The central thesis is that the shortage will lead to innovation, but in my experience it has put a HUGE strain on everyone doing R&D. I suspect it'll instead lead to rising costs and stagnation as people wait for supplies. This is already basically what we have in the consumer hardware market, where last year's consoles still aren't consistently on store shelves.
> The chip shortage could lead to an era of hardware innovation
It can, but people currently busy "innovating" are at least half a decade too late. 200mm capacity shortages and year-long backlogs have been common for at least 5 years now.
The opportunity to either exploit the shortage or ameliorate it is long gone.
> That’s because electric cars require a lot more computer and chip-based components than those powered by gas
Electric cars require far fewer electronic components!!!
> “I saw a design the other day of a lightbulb with 12 components,” said Carrieres. “[My team] asked me if this was an idea, and I had to tell them that it was a real product.”
Recently we had a rush project to redesign a control panel for a certain piece of machinery. The task was to replace or remove the MCUs currently in shortage.
After looking at the project for a few minutes, we realised the MCU was basically used just for blinky LEDs and an interlock.
Our engineer slapped together a few 555s, flip-flops, and registers in under an hour. The client engineers' jaws dropped.
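For a sense of scale, the firmware being replaced probably amounted to something like the sketch below (purely illustrative, not the client's actual code; the stub I/O functions, pin roles, and 1 Hz blink rate are all my assumptions):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical board-support stubs; the real panel would map these
       to whatever GPIO it actually uses. */
    extern bool read_interlock_switch(void);  /* guard/door switch */
    extern bool read_start_button(void);
    extern void set_run_output(bool on);
    extern void set_status_led(bool on);
    extern void delay_ms(uint32_t ms);

    int main(void) {
        bool led = false;
        for (;;) {
            /* Interlock: only drive the run output while the guard is
               closed and the start button is held. */
            set_run_output(read_interlock_switch() && read_start_button());

            /* Blinky status LED at roughly 1 Hz. */
            led = !led;
            set_status_led(led);
            delay_ms(500);
        }
    }

A couple of 555s and some logic can do the same job without a line of firmware, which is the whole point of the anecdote.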
While the real reason the client was at the mercy of their supply chain wasn't that individual MCU, it still shows just how much blindly following "one chip to rule them all" can eat away at supply-chain resilience.
If you have a part uniquely fitted to your unique needs, the likelihood of it being some rare widget with poor availability goes up.
I find it a bit hard to believe that a whole microcontroller can be cheaper than a few discrete components on their own. What probably happened before the shortage was that the MCU was cheap enough, gave great flexibility in prototyping, and could be suited to many different tasks. After the prototyping stage it should have been replaced by something simpler, but since good enough was cheap enough, it was left as it was.
When it comes down to very simple microcontrollers with only a few pins, the package cost starts to dominate. Consider that each pin on an IC costs a few cents. If a small-pin-count MCU is replaced with a 555, some discrete logic, and passives for each of those ICs, the discrete approach can certainly end up more expensive than the MCU, not to mention the added cost of the extra PCB area for all those components.
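A rough back-of-the-envelope illustration, taking the few-cents-per-pin figure above at 3 cents per pin (the specific parts and prices are made up; only the ratio matters):

    8-pin MCU:                               8 pins x $0.03 ≈ $0.24
    555 (8 pins) + dual flip-flop (14 pins)
      + register (16 pins):                 38 pins x $0.03 ≈ $1.14

And that's before the extra passives and the additional PCB area each package drags in.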
Yeah, I guess economy of scale means that you can manufacture 1,000,000,000 identical MCUs which are flexible and multi-purpose. Up to the point where you can't and it turns out some (most?) of them can be replaced by simple circuits with a few components.
Also remember that a bigger bill of materials in itself potentially complicates supply chain. The other thing that comes to mind is that more components may mean more passives or a more complex circuit.
> I suspect replacing the MCU with discrete components would have resulted in a more expensive product if there was no shortage.
So what? Complexity and BOM size are not problems by themselves. You should not view them in isolation from the context.
This is exactly the thinking that led the client into this mess: "trying to save money at any cost".
The point I'm making is that they easily spent multiple times the cost of this panel's entire production run as a punishment for that.
Blinky LEDs and an interlock between a few buttons really is overkill for an MCU, and should've been treated as such no matter how many cents could be saved.
You can save a few dollars on a screwdriver if you can put in screws with a hammer, but you are certainly better off not doing that.
There are an infinite number of ways to make a project less fragile to changing market conditions, but you can't maximize all of them; it's all trade-offs. Would making a project more resilient to semiconductor shortages increase its vulnerability to other factors? Perhaps the increased electrical load pushes the battery requirements above the budgeted space. Perhaps the increased cost puts the product above the budget of customers and makes it no longer viable in the marketplace. There are too many factors I don't know about to possibly comment on all of them. The replacement of the MCU with discrete components was mentioned, and I commented on a few of the factors which could be involved in that decision; I did not claim to have made an exhaustive analysis of the product spec to determine why the choices were made.
> So what? Complexity and BOM size are not problems by themselves. You should not view them in isolation from the context.
Sure. So what's important about context? Is it impossible to have a 555 shortage?
> Blinky LEDs and an interlock between a few buttons really is overkill for an MCU, and should've been treated as such no matter how many cents could be saved.
Would you refuse to use 1% resistors if they were cheaper than 10% resistors? Overkill isn't bad by itself.
> You can save a few dollars on a screwdriver if you can put in screws with a hammer, but you are certainly better off not doing that.
It's more like a set of 100 drill bits being cheaper than the specific 20 you need all wrapped in individual boxes.
I can confirm that this is an extremely challenging time to do a hardware startup. In addition to the supply and freight shortages that are deadly to startups stuck at the back of the line:
1. Travel restrictions make it extremely challenging to evaluate suppliers, do joint development, and bring up manufacturing.
2. A bunch of recent high profile failures have caused VCs to cool on the space.
3. Google, Facebook, Amazon, and Microsoft are all hiring armies of hardware people, making the job market more competitive than it once was.
All that said:
1. There’s still no shortage of interesting and worthwhile problems to solve with hardware.
2. Like web companies that survived the dot com bubble bursting, the hardware companies that make it through 2020 and 2021 are going to have been strengthened by it.
Please provide examples of (4). I have worked for HW companies pretty much all of my career and (4) is news to me. I would very much like to understand what you're seeing.
Let's just say that, regarding a major facility for a major CM, my agents and I have dealt with them a few times over the last few years. The first time, a well-meaning manager said "stay away, this company is bad news". Everything I have since learned (a tour of the facility showing old tech, a dinosaur client base, a partial selloff of the site, bizdev literally stating that their reason for not taking on simple projects was complexity, employees covertly selling use of software the company has licensed) reaffirms that perception. The stock price, however, doesn't. It may as well be dogecoin.
Definitely, chip and other component shortages are getting increasingly serious, and costs are going through the roof; voltage regulators that normally cost 50 cents have been selling for as much as $70.
There is indeed a lot of demand for chips and electronic components of all kinds, but I understand manufacturers claim it is not real demand, just a spike caused by uncertainty in international supply chains.
https://www.taipeitimes.com/News/biz/archives/2021/04/22/200...
Just like we all saw people buying toilet paper like crazy at the beginning of the pandemic, there is also a lot of hoarding by big tech companies, which makes this shortage likely to last well into 2022.
So, they're not exactly willing to expand capacity for a demand that is most likely to go away eventually.
So indeed, it is a challenging time to design and manufacture hardware; all the big players are buying it all.
The only advice I could give to anyone starting to design their electronics is to make the design as flexible as possible: leave room on the PCB for two or more alternative components.
You could even do two different PCB layouts and choose one or the other based on the price and availability of components. Oh, and try to delay component selection as much as possible; that's the most useful advice I've read: https://titoma.com/blog/reduce-component-lead-time
> A bunch of recent high profile failures have caused VCs to cool on the space.
VCs have been "cool" on the hardware space since 2001.
Why fund hardware which can take 5-10 years to go somewhere when you can fund the latest social garbage fad and it will take off or fail completely within 36 months?
A successful startup allows others to claim that they're "Uber for pets", which encourages VC money; but if there are famous blow-ups in the space, all of a sudden you have to explain why you're not "WeWork for chips", which makes it harder to get funding started.
When a gold rush is on, nobody gets yelled at for buying shovels even if they lose money. But going against "conventional wisdom" requires either an entirely independent VC or investors who really trust you.
Yeah, I'm with you. To give the author the benefit of the doubt, maybe they were thinking of "turning to software innovation" as optimizing, i.e. longer dev cycles that result in less resource wastage: rather than needing the latest chip to run Electron-based applications everywhere, they'd spend a bit more time developing software that can run well on older chips.
For integrated hardware, like IoT, cars and so on, it makes sense to try to reduce the number of chips needed and glue the remaining bits together using software. I don't see how there's much else you can do to deal with a shortage, other than perhaps starting to remove many of the "smart" features and accepting that perhaps we didn't need all of them to begin with.
For computers (desktops, laptops, servers, and phones) we could start focusing on more efficient software. Replacing laptops every two or three years is no longer cost effective. We're starting to look at 5-to-8-year lifespans for phones and laptops, meaning that developers need to focus more on efficiency.
It's two different ways of dealing with rising chip prices: Use fewer chips, and use them longer. It's also a great way of shifting to greener computing.
10-year-old laptop chips work well for the vast majority of users. The problems are shoddy cases and integrations that fail, plus lack of repairability/upgradability and planned obsolescence (even in chassis that have no thin-and-light excuse), both of which lead to waste.
> which turns components that once were hardware into a software program
this is a fun one, because there is no shortage of knobs and buttons, just chips, and knobs don't need chips to work. Even if you do need a chip to make one work (e.g. rotary encoder to CAN bus), most of those are still available, because "old tech" processes are good enough for them (compared to the newer, smaller transistor sizes needed for chips that process a lot of data, e.g. graphics cards, modern CPUs, and even touchscreen radios for cars).
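For what it's worth, decoding the knob itself is the kind of job a decades-old MCU, or a handful of logic, handles trivially; a minimal polled quadrature decoder, with the GPIO read left as a hypothetical stub:

    #include <stdint.h>

    /* Hypothetical stub: returns encoder channel A in bit 0, B in bit 1. */
    extern uint8_t read_encoder_pins(void);

    /* Classic quadrature table: index = (previous state << 2) | current
       state, value = -1, 0 or +1 count per valid transition. */
    static const int8_t step_table[16] = {
         0, -1, +1,  0,
        +1,  0,  0, -1,
        -1,  0,  0, +1,
         0, +1, -1,  0
    };

    int32_t poll_encoder(void) {
        static uint8_t prev = 0;
        static int32_t position = 0;

        uint8_t curr = read_encoder_pins() & 0x03;
        position += step_table[(prev << 2) | curr];
        prev = curr;
        return position;
    }

The hard-to-get part is the fancy SoC downstream, not anything the knob needs.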
I stopped reading after the car bit. Anyone who's advocating more eye-requiring touchscreen controls for cars instead of less hasn't put enough thought into their opinions for anything else they've written to be worth reading.
You're talking about two different things: eliminating hardware by doing more in software (what the article was describing) vs consolidating discrete logic into an ASIC (both the IWM and the discrete logic it replaced performed the same task in hardware; the IWM just reduced it to one chip, allowing them to eliminate the controller PCB).
> The IWM just reduced it to one chip, allowing them to eliminate the controller PCB
From the Wikipedia article: "The floppy drive controller was built with 8 ICs, one of which is the PROM, containing tables for the encoder and decoder, the state machine, and *some code*."
(My emphasis.)
What made it at all possible was moving most of the logic to the CPU:
The IWM chip, used in the prototype Mac with double sided "Twiggy" drives, and later in the original production Mac 128k with Sony 3.5" diskette drives, is basically Woz's original disk controller design (converted by Wendell Sander and Bob Bailey to NMOS), with the addition of buffer registers to allow the processor to run a bit more asynchronously. But, except for not having to run (* almost) synchronous with the disk data rate, the 68k CPU did all the raw data decoding (GCR nibbles to bytes, etc.) and head stepping in software, very similar to the 6502 code in the Apple II (DOS) RWTS.
No, the IWM replaced a floppy-controller ASIC integrating a PLL, track and sector ID detectors, a bit shifter, FM/MFM encoders/decoders, and CRC logic. All of that was replaced by a simple bit shifter, a state machine, and software running on the computer's main CPU. The Amiga employed a similar trick, reusing the blitter to encode/decode tracks.
> The central thesis is that the shortage will lead to innovation, but in my experience it has put a HUGE strain on everyone doing R&D. I suspect it'll instead lead to rising costs and stagnation as people wait for supplies. This is already basically what we have in the consumer hardware market, where last year's consoles still aren't consistently on store shelves.
I think there's some truth to creativity being enhanced by constraints. Certainly, if supplies are limited, and especially if the limits are uneven, there's going to be incentive to design around the chips that are limited, and some of that might be innovation that could be useful even after supply goes back to normal. Of course, CPU shortages are hard to design around, especially if all CPUs are in short supply; but some applications might be able to make use of different CPUs that might have more availability.
Technically, sure. But the extent of the innovation I've seen so far is people going back to using inferior chips because that was all they could get within their budget.
Microcontrollers aren't exactly interchangeable, even within the same product line. You could design for flexibility and use the Arduino framework to run your code on most Microchip/Atmel, ST, and a million other chips, but that comes at enormous cost; to put it nicely, it's an incredibly inefficient library if you're not doing anything demanding, and damn near worthless if you are. Any multiplatform framework that abstracts away the inner workings of microcontrollers is going to be too heavy for a huge percentage of people's power and complexity profiles.
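A small illustration of that abstraction cost on a classic AVR board (Uno-class ATmega328, where Arduino pin 13 is PORTB bit 5; the exact overhead varies by core and toolchain, so treat the comments as ballpark):

    #include <Arduino.h>  // pulls in avr/io.h on AVR targets

    void setup() {
      pinMode(13, OUTPUT);
    }

    void loop() {
      // Portable framework call: per-call pin-to-port lookup plus a check
      // to disconnect any PWM on the pin; tens of cycles each time.
      digitalWrite(13, HIGH);

      // Direct register write for the same pin on this specific chip:
      // compiles down to a single sbi instruction.
      PORTB |= (1 << PB5);
    }

Multiply that kind of overhead across interrupts, timers, and peripherals and you can see why the portable layer stops being viable once power or timing budgets get tight.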
It's not just MCUs and firmware either, any time you replace a component due to shortages you need to revalidate your design. Constantly redesigning and revalidating boards based on available stock is what people are doing right now to keep the lights on. It's hell.
If you don't need a microcontroller to do whatever you do, then sure. Pop it out and save a few bucks. But that's hardly innovation, it's more rectifying a mistake made when doing the initial design.
I think you're like 98% right. Swapping an MCU is a lot of work, and other chips are too... I just wonder how many people are going to have all the chips but one, figure out how to wing it, and how many of those solutions end up being interesting/useful/kept past the point when the missing chip becomes available.
I'm thinking of stuff like how (at least some) dishwashers with "dirt" sensing don't actually sense dirt at all; instead the pump has a thermal overload, and they count how many times the pump cycles to gauge dirtiness.
If you used to have a dirt sensor but it's now delayed 18 months, you might figure something like that out, and maybe that's handy. Or maybe there's something else you'd like to measure with no good way to measure it directly, but it causes something else to misbehave in a way you can measure; you wouldn't have thought of it except that you ran out of dirt sensors.
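A purely hypothetical sketch of that kind of proxy measurement (I have no idea how any real dishwasher firmware does this; the stub and names are invented):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical stub: true while the pump's thermal overload is tripped. */
    extern bool pump_overload_tripped(void);

    /* Count overload trips during a wash cycle: more trips means more solids
       loading the pump, i.e. a "dirtier" load, so the cycle gets extended. */
    uint32_t update_dirtiness_estimate(void) {
        static bool was_tripped = false;
        static uint32_t trip_count = 0;

        bool tripped = pump_overload_tripped();
        if (tripped && !was_tripped)
            trip_count++;  /* count rising edges only */
        was_tripped = tripped;
        return trip_count;
    }

No dirt sensor in sight, just an inference from a part that was already on the BOM.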
>> turning instead to software to handle functions that had historically been done in hardware
>That generally means doing things less efficiently. Devices will get slower and battery lifetimes shorter.
I'm personally not at all convinced hardware-accelerated GUI toolkits are more power efficient than software-rendered ones. Whether it's due to the abstraction or some weird idea that drawing is now "free", you end up with way more drawing being done once something is hardware accelerated, and it tends to more than offset the efficiency gains. The only place it really works is video games, because they're actually paying attention to where the limits are (and they don't really care about power efficiency).
> I'm personally not at all convinced hardware accelerated GUI toolkits are more power efficient than software rendered ones.
It really depends on what is being drawn and how. If a GUI toolkit is all flat colors and basic shapes and doesn't use antialiasing, then it can be very power efficient on modern CPUs, even low-power embedded ones. If it instead uses gradients, alpha blending, antialiasing, and complex shapes, it's going to be far less efficient done entirely in software. The difficulty increases with the demand for fill rate (display size x frame rate).
On modern CPUs (even embedded ones) software rendering to a QVGA display would be no problem, and likely no less efficient than hardware acceleration. But as fill-rate demands and drawing complexity increase, software rendering quickly hits a wall.
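Some rough fill-rate numbers (display size x frame rate, as defined above) to show how fast that wall approaches:

    QVGA  320 x  240 @ 30 fps ≈   2.3 Mpixels/s
    XGA  1024 x  768 @ 60 fps ≈  47   Mpixels/s
    FHD  1920 x 1080 @ 60 fps ≈ 124   Mpixels/s

Going from a small QVGA panel to a full-HD screen is roughly a 50x jump in pixels to touch every second, before you add blending or antialiasing on top.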
The same goes for the argument about power-management chips not benefiting from Moore's law. Yes, but that's been the case for decades. Thermal and current requirements demand a certain amount of silicon regardless of how small you can make the features in a low-power chip.
>> turning instead to software to handle functions that had historically been done in hardware
> That generally means doing things less efficiently. Devices will get slower and battery lifetimes shorter.
Not necessarily. For example, the original idea behind RISC was to move lots of complicated functions (and instructions) from hardware into software, so that the hardware could concentrate on doing the most common and simple operations faster.
Those common computationally complex functions generally get abstracted away via CMSIS or the like; I've yet to beat the hardware implementations of common signal-processing algorithms in software despite actively trying. Anyone trying to fill in for missing DSP hardware is going to have a very bad time.
I can't imagine it's much better for other hardware-to-software transitions; given the cost of dedicated hardware implementations, they're generally only used when actually necessary.
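For concreteness, this is roughly the kind of inner loop in question: a textbook FIR filter in plain C (a naive version purely for illustration; a core's DSP/MAC instructions or dedicated filter hardware will typically run circles around it):

    #include <stddef.h>

    /* Straightforward FIR: y[i] = sum over k of h[k] * x_new[i - k].
       x must hold n + num_taps - 1 samples: num_taps - 1 of history
       followed by the n new samples. Writes n outputs to y. */
    void fir_naive(const float *x, float *y, size_t n,
                   const float *h, size_t num_taps) {
        for (size_t i = 0; i < n; i++) {
            float acc = 0.0f;
            for (size_t k = 0; k < num_taps; k++)
                acc += h[k] * x[i + num_taps - 1 - k];  /* one MAC per tap */
            y[i] = acc;
        }
    }

Every tap is a multiply-accumulate, which is exactly the operation dedicated DSP blocks and SIMD extensions are built to do several of per cycle.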