
Also to cheaply (area) create multi-port RAMs.

How does that work?

It's similar to RAID schemes, but instead of drive failure it's port unavailability. There's a reference at [1], or an FPGA-centric one at [2]; it applies anywhere that dual/single-port RAMs are readily available but anything more exotic isn't. A toy model of the XOR trick is sketched below the references.

  [1] Achieving Multi-Port Memory Performance on Single-Port Memory with Coding Techniques - https://arxiv.org/abs/2001.09599
  [2] Multi-Ported Memories for FPGAs via XOR - https://people.csail.mit.edu/ml/pubs/fpga12_xor.pdf
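To make that concrete, here's a toy behavioural model in Python of the XOR scheme from [2] (the class and method names are mine, and a real implementation is RTL, not software). Two single-write banks emulate a RAM with two write ports: each write stores its data XORed with the other bank's contents at that address, so XORing the banks back together on a read recovers the last value written. The cost is the extra bank plus an internal read per write, which is still far cheaper than a true multi-port cell.

  class Xor2W1R:
      """Toy 2-write-port, 1-read-port RAM built from single-write banks."""
      def __init__(self, depth):
          self.bank = [[0] * depth, [0] * depth]  # one bank per write port

      def write(self, port, addr, data):
          # Store data ^ other bank, so XORing the banks on read undoes it.
          # In hardware this needs an extra internal read of the other bank,
          # which is the area trade-off versus a true multi-port cell.
          other = 1 - port
          self.bank[port][addr] = data ^ self.bank[other][addr]

      def read(self, addr):
          return self.bank[0][addr] ^ self.bank[1][addr]

  ram = Xor2W1R(16)
  ram.write(0, 3, 0xAB)  # port 0
  ram.write(1, 7, 0xCD)  # port 1 (same-cycle writes to one address are banned)
  assert ram.read(3) == 0xAB and ram.read(7) == 0xCD

The paper's full designs combine this XOR banking (for write ports) with plain replication (for read ports); the XOR parity is what gives it the RAID flavour.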

And Broadcom designs a huge part of the chip. They take Google's (mostly) logical design and provide everything TSMC needs to physically make the chip (including imported IP such as serdes, PLLs, and test).

I disagree. We've produced numerous complex chips with VHDL over the last 30 years. Most of the vendor models we have to integrate with are Verilog, so perhaps it is more popular, but that's no problem for us. We've found plenty of bugs in the commercial tooling we use for both VHDL and Verilog; neither is particularly worse (provided you're happy to steer clear of the more recent VHDL language features).


Except they don't use DDR5. LPDDR5 is always soldered. LPDDR5 requires short point-to-point connections to give you good SI at high speeds and low voltages. To get the same with DDR5 DIMMs, you'd have something physically much bigger, with way worse SI, higher power, and higher latency. That would be a much worse solution. GDDR is much higher power, and the solution would end up bigger; plus it's useless for system memory, so now you need two memory types. LPDDR5 is the only sensible choice.


> LPDDR5 is always soldered.

No it isn't:

https://www.newegg.com/crucial-32gb-ddr5-7500-cas-latency-cl...

CAMM2 is new and most of the PC companies aren't using it yet but it's exactly the sort of thing Apple used to be an early adopter of when they wanted to be.


OK, but that's significantly slower and larger. It's a worse solution. Am I missing something?


It looks like LPCAMM2 is shipping from one vendor, and only started in October; that's a bit quick and early for Apple to adopt.


It wasn't too quick and early for Dell and Lenovo to adopt.

It's the new thing. All three of the DRAM manufacturers intend to produce it:

https://www.digitimes.com/news/a20240916PD207/samsung-lpcamm...

And it's called "CAMM2" because it's not even the first version. Apple could have been working with the other OEMs on this since 2022 and been among the first to adopt it instead of the last:

https://www.techpowerup.com/294240/dells-ddr5-camm-appears-i...


Is it really useless for system memory or is it just too expensive and no manufacturer has bothered?


> the easiest part of the system to get changed... the core routers and associated infrastructure.

Is that really the easy bit to change? ISPs spend years trialling new hardware and software in their core. You go through numerous cheapo home routers over the lifetime of one of their chassis. You'll use whatever no-name box they send you, and you'll accept their regular OTA updates too, else you're on your own.


> Is that really the easy bit to change?

When you're adding support for a new Internet address protocol that's widely agreed to be the new one, it absolutely is. Compared to what end-users get, ISPs buy very high quality gear. The rate of gear change may be lower than at end-user sites but because they're paying far, far more for the equipment, it's very likely to have support for the new addressing protocol.

Consumer gear is often cheap-as-possible garbage that has had as little effort put into it as possible. [0] I know that long after 2012, you could find consumer-grade networking equipment that did not support (or actively broke) IPv6. [1] And how often do we hear complaints of "my ISP-provided router is just unreliable trash, I hate it", or stories of people saving lots of money by refusing to rent their edge router from their ISP? The equipment ISPs give you can also be bottom-of-the-barrel crap that folks actively avoid using. [2]

So, yeah, the stuff at the very edge is often bottom-of-the-barrel trash and is often infrequently updated. That's why it's harder to update the equipment at the edge than the equipment in the core. It is way more expensive to update the core stuff, but it's always getting updated, and you're paying enough to get much better quality than the stuff at the edge.

[0] OpenWRT is so, so popular for a reason, after all.

[1] This was true even for "prosumer" gear. I know that even in the mid 2010s, Ubiquiti's UniFi APs broke IPv6 for attached clients if you were using VLANs. So, yeah, not even SOHO gear is expensive enough to ensure that this stuff gets done right.

[2] You do have something of a point in the implied claim that ISPs will update their customer rental hardware with IPv6 support once they start providing IPv6 to their customer. But. Way back when I was so foolish as to rent my cable modem, I learned that I'd been getting a small fraction of the speed available to me for years because my cable modem was significantly out of date. It required a lucky realization during a support call to get that update done. So, equipment upgrades sometimes totally fall through the cracks even with major ISPs.


> but it's always getting updated,

I entirely disagree, due to a combination of ISPs sticking with what they know and refusing to update (because of the huge time/cost of validating it), and vendors minimising their workload/risk exposure and only updating what they "have to". The vendors have a lot of power here, and these big new protocols are just more work.

In addition, smaller ISPs have virtually no say in what software/features they get. They can ask all they want; they have little power. It takes a big customer to move the needle and get new features into these expensive boxes. It really only happens when there's another vendor offering something new, and therefore a business requirement to maintain feature parity or lose big-customer revenue. So yeah, if a new protocol magically becomes standard, only then would anyone bother implementing and supporting it.

I think it's much easier to update consumer edge equipment. The ISP dictates all aspects of this relationship, and the boxes are cheap and plug-and-play. They're relatively simple and easy to validate for 99% of use cases. If your internet stops working (because you didn't get the new hw/sw), they ship you a replacement; two days later it's fixed.

But I will just say, slightly off-topic for this thread, that the lack of multiple extension headers in this proposed protocol instantly makes it more attractive to implement compared to v6 (a toy illustration of the v6 pain follows).
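Here's a toy Python illustration (nobody's actual parser) of why v6's chained extension headers are a pain for fixed-latency hardware: finding the transport header means walking Next Header fields one hop at a time, and each step's offset depends on the previous header's length, so the walk is inherently sequential. AH and ESP are omitted because they encode lengths differently (or hide them entirely).

  # Headers that use the common (Hdr Ext Len + 1) * 8 size encoding:
  # hop-by-hop (0), routing (43), fragment (44), destination options (60).
  EXT_HEADERS = {0, 43, 44, 60}

  def find_l4(packet: bytes):
      nh = packet[6]   # Next Header field of the fixed IPv6 header
      off = 40         # the fixed header is always 40 bytes
      while nh in EXT_HEADERS:
          nh = packet[off]                  # type of the *next* header
          off += (packet[off + 1] + 1) * 8  # step size depends on this header
      return nh, off   # transport protocol number and its byte offset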


> I entirely disagree. Due to a combination of ISPs sticking with what they know and refusing to update... and vendors minimising their workloads/risk exposure and only updating what they "have to"...

You misunderstand me, though the misunderstanding is quite understandable given how I phrased some things.

I expect the updating usually occurs when buying new kit, rather than on kit that's deployed... and that that purchasing happens regularly, but infrequently. I'm a very, very big proponent of "If it's working fine, don't update its software load unless it fixes a security issue that's actually a concern." New software often brings new trouble, and that's why cautious folks do extensive validation of new software.

My commentary presupposed that

  [Y]ou're adding support for a new Internet address protocol that's widely agreed to be *the* new one
which I'd say counts as something that a vendor "has to" implement.

> I think it's much easier to update consumer edge equipment. The ISP dictates all aspects of this relationship...

I expect enough people don't use the ISP-rented equipment that it's -in aggregate- actually not much easier to update edge equipment. That's what I was trying to get at with talking about "ISP-provided routers & etc are crap and not worth the expense".


On the other hand, consumer routers route in software, which is easily updated. Core routers with multi-terabit-per-second connections use specialized ASICs to handle all that traffic, which can never be updated.


> On the other hand, consumer routers route in software, which is easily updated.

Sure. On the other other hand, companies going "Is this a security problem that's going to cost us lots of money if we don't fix it? No? Why the fuck should I spend money fixing it for free, then? It can be a headline feature in the new model." means that -in practice- they aren't so easily updated.

If everyone in the consumer space made OpenWRT-compatible routers, switches, and APs, then that problem would be solved. But -for some reason- they do not and we still get shit like [0].

[0] <https://www.youtube.com/watch?v=KsiuA5gOl1o>


> consumer routers route in software, which is easily updated

You must have had much better experiences with firmware update policies for embedded consumer devices than me.


Implementing DDR3 training for our packet queuing chip (custom memory controller) was my first project at work. We had originally hoped to use the same training params for all parts. That wasn't reliable even over a small number of testing systems in the chamber. DDR3 RAM parts were super cheap compared to what we had used in previous generations, and you get what you pay for with a huge amount of device variation. So we implemented a relatively long training process to be run on each device during our board testing (roughly the loop sketched below), and saved those per-lane skews. But we found the effects of temperature, and particularly system noise, were too great once the system was sending full-rate traffic. (The training had to be done one interface at a time, with pedestrian data rates.) We then ended up with a quick re-training pass to re-center the eyes. It still wasn't perfect: slower RAM chips (with smaller eyes) would report ECC correctables when all interfaces were doing worst-case patterns at temperature extremes. We spent a lot of time making those interfaces robust, and ended up relying more on ECC than we had intended. But those chips have been shipping ever since and will have seen traffic from most of us.
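For anyone curious what "training" mechanically means here, a heavily simplified Python sketch of the per-lane part (every interface name is hypothetical; the real thing pokes controller registers and is far messier): sweep the lane's delay taps, record which taps read a test pattern back cleanly, then park the delay in the middle of the widest passing window, i.e. the centre of the eye.

  def train_lane(set_delay_tap, pattern_passes, num_taps=64):
      """Centre one lane's delay in its data eye (hypothetical interface)."""
      # 1. Sweep every tap and note which ones read the pattern back cleanly.
      passing = []
      for tap in range(num_taps):
          set_delay_tap(tap)
          if pattern_passes():
              passing.append(tap)
      if not passing:
          raise RuntimeError("no eye found on this lane")
      # 2. Find the widest contiguous run of passing taps (the eye)...
      best = run = [passing[0]]
      for prev, tap in zip(passing, passing[1:]):
          run = run + [tap] if tap == prev + 1 else [tap]
          if len(run) > len(best):
              best = run
      # 3. ...and park the delay at its centre, maximising margin both ways.
      centre = best[len(best) // 2]
      set_delay_tap(centre)
      return centre

The quick re-training pass mentioned above would then only need a narrow sweep around each saved centre, rather than the full tap range.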


You played in hard mode in a weird sense; more modern DDR versions are in a backwards sense "easier" if you're buying the IP, because a lot of the training has moved to boot time and is handled by the vendor IP rather than needing to be run during burn-in using some proprietary toolkit or self-test tool.

It's just as arcane and weird, but if you buy one of the popular modern packages for DDR4/5 like DesignWare, more and more training is accomplished using opaque blob firmware (often ARC) loaded into an embedded calibration processor in the DDR controller itself at boot time rather than constants trained by your tooling or the vendor's.


Wow, I was spoiled building firmware for my ARM boards then (building, not developing).

Marvell has a source-available DDR driver that actually takes care of training on a few of their platforms! https://github.com/MarvellEmbeddedProcessors/mv-ddr-marvell


I don't know if this is still the case, but back then the likes of Synopsys charged a lot of money for what was very limited controller functionality; you were stuck with their frustrating support channels and generally dumpster-fire firmware. Our controller was fully custom to our needs, supporting more optimal refresh schemes tightly integrated with our application, multiple memory protocols (not just DDR3), and I don't remember what else.

At least we were able to modify the training algorithms and find the improvements, rather than being stuck with the usual vendor "works for us" response. Especially with something like commodity DDR, where our quantities don't command much clout. But it was a bit of an ordeal and may have contributed to us buying in a controller for our next gen (not DDRx). I think we're going the other way again after that experience, though!


Doesn't that M1 Air have a much, much slower CPU?


https://news.ycombinator.com/item?id=47256032

M1 Air single core is 2347 and multicore is 8342. Plus, the A18 pro chip in the Neo will likely perform better with the improved thermal environment.


Agree entirely with your take. The packaging story is awesome, I wish there were more details on the stacking used on this one.

But I am at a loss as to how Intel are really going to get any traction with IFS. How can anyone trust Intel as a long-term foundry partner? Even if they priced it more aggressively, the opportunity cost of picking a supplier who decides to quit next year would be catastrophic for many. The only way this works is if they practically give their services away to someone big, who can afford to take that risk and can also make it worth Intel's continued investment. Any ideas who that would be? I've got nothing.


I suspect that timing might help Intel here: with so much of the better-established foundries' capacity fully allocated for the next two years, it may be more a question of availability than brand-name risk. And for whatever problems Intel has, it's pretty unlikely they'd go completely under and dissolve in less than a year. Solid non-completion clauses in the contracts can mitigate a good chunk of the remaining risk.

Not to mention potential customers who would prefer a US based foundry regardless. My guess is that there's a pretty large part of the market that would be perfectly fine with using Intel.


> How can anyone trust Intel as a long-term foundry partner

With the standard form of business trust: a contract.


Worthless. Just look at how IFS worked out the previous two times they gave it a go. If you're not in the industry you may not even be aware it was a thing. And then not. Twice.


And how many times did Intel get sued for breach of contract over changing their mind? If they have a contract, they'll honor it or compensate.


And that's why it's got to be a big company that takes this on, you need deep pockets to successfully sue a company like Intel. It's not realistic for most. Plus the huge opportunity cost of missing your market and wasting years having to start over. Again, a bigger company can survive that with multiple projects in parallel.


> It’s a platform that sucks artists for everything they have, it actively prevents community building

The API used to be great. It's slowly being crippled, presumably a PR/face-saving slow-death play.

- https://community.spotify.com/t5/Spotify-for-Developers/Febr...

- https://community.spotify.com/t5/Spotify-for-Developers/Chan...

- https://community.spotify.com/t5/Spotify-for-Developers/Unab...


Japan is a terrible example for you, they are focused on ditching the US.

