Hacker News | ant6n's comments

> In ancient times, floating point numbers were stored in 32 bits.

I thought in ancient times, floating point numbers used to be 80 bit. They lived in a funky mini stack on the coprocessor (x87). Then one day, somebody came along and standardized those 32 and 64 bit floats we still have today.


I was going to reply that just because Intel did something funny doesn't mean it was the beginning of the story, but it turns out that the release of the 8087 predates the ratification of IEEE floats by two years. In addition, the primary numeric designer for the 8087 was apparently Kahan, which means both were part of the same design process. Of course, there were other formats predating both of these.

The Intel 8087 design team, with Kahan as their consultant (he authored most of the novel features, drawing on his experience with the design of the HP scientific calculators), realized that instead of keeping their new, much-improved floating-point format proprietary, it would be much better to agree with the entire industry on a common floating-point standard.

So Intel initiated the discussions for the future IEEE standard with many relevant companies, even before the launch of the 8087. AMD was convinced immediately, so it was able to introduce an FP accelerator (Am9512) based on the 8087 FP formats (which were later adopted in IEEE 754), also in 1980 and a few months before the launch of the Intel 8087. So in 1980 there were already two implementations of the future IEEE 754 standard. The Am9512 was licensed to Intel, which sold it under the 8232 part number (it was used in 8080/8085/Z80 systems).

Unlike AMD, the traditional computer companies agreed that an FP standard was needed to solve the mess of many incompatible FP formats, but they thought the Kahan-Intel proposal would be too expensive for them, so they came up with a couple of counter-proposals, following their tradition of prioritizing implementation costs over usefulness for computer users.

Fortunately, the Intel negotiators eventually succeeded in convincing the others to adopt the Intel proposal, by explaining how the new features could be implemented at an acceptable cost.

The story of IEEE 754 is one of the rare stories in standardization where what is best for customers was chosen over what is best for vendors.

Like the use of encryption in communications, the IEEE standard has been under continuous attack throughout its history, from each new generation of logic designers who think they are smarter than their predecessors and are too lazy to implement some features of the standard properly. Older designs have demonstrated that those features can in fact be implemented efficiently, but the newcomers take the easy path and implement them inefficiently, on the assumption that users will not care.


The floating-point "standard" basically codified multiple different vendor implementations of the same idea; hence the mess of floating point not being consistent across implementations.

IEEE 754 basically had three major proposals that were considered for standardization. There was the "KCS draft" (Kahan, Coonen, Stone), which was the draft implemented for the x87 coprocessor. There was DEC's counter-proposal (aka the PS draft, for Payne and Strecker), and HP's counter-proposal (aka the FW draft, for Fraley and Walther). Ultimately, it was the KCS draft that won out and became what we now know as IEEE 754.

One of the striking things, though, is just how radically different KCS was. By the time IEEE 754 formed, there was a basic commonality in how floating-point numbers worked. Most systems had a single-precision and a double-precision form, and many had an additional extended-precision form. These formats were usually radix-2, with a sign bit, a biased exponent, and an integer mantissa, and several implementations had hit on the implicit-integer-bit representation. (See http://www.quadibloc.com/comp/cp0201.htm for a tour of several pre-IEEE 754 floating-point formats.) What KCS did that was really new was add denormals, and this was very controversial. I also think that support for infinities was introduced with KCS, although there were more precedents for the existence of NaN-like values. I'm also pretty sure that sticky bits, as opposed to trapping for exceptions, were considered innovative. (See, e.g., https://ethw-images.s3.us-east-va.perf.cloud.ovh.us/ieee/f/f... for a discussion of the differences between the early drafts.)

Now, once IEEE 754 came out, pretty much every subsequent implementation of floating-point has started from the IEEE 754 standard. But it was definitely not a codification of existing behavior when it came out, given the number of innovations that it had!


That is merely medieval times.

In ancient times, floats were all 60 bits and there was no single precision.

See page 3-15 of this https://caltss.computerhistory.org/archive/6400-cdc.pdf


I see their 60-bit float has the same size exponent (11 bits) as today's doubles. Only the mantissa was smaller, 48 bits instead of 52.

That written document is prehistoric.

By definition, a document that is written is historic, not prehistoric.

Prehistoric information could be preserved by an oral tradition, until it is recorded in some documents (like the Oral Histories at the Computer History Museum site).


The 80 bits are just inside the processor. That's why you might get a slightly different result, depending on how the expression was evaluated and whether intermediate values were stored to RAM along the way.

The Intel 8087, which introduced the 80-bit extended floating-point format in 1980, could store and load 80-bit numbers, avoiding any alteration caused by conversion to less precise formats.

To be able to use the corresponding 8087 instructions, "long double" was added to the C language. So to avoid extra roundings, one had to use "long double" variables and also be careful that intermediate values used in computing an expression were not spilled to memory as "double".

However, this became broken in some newer C compilers, where, due to the deprecation of the x87 ISA, "long double" was made synonymous with "double". Some better C compilers chose to implement "long double" as quadruple precision instead of extended precision, which ensures that no precision is lost, but which may be slow on most computers, where no hardware support for FP128 exists.


x87 always had a choice of 32/64/80-bit user-facing floats. It just operated internally on 80 bits.

You can set x87 to round each operation result to 32-bit or 64-bit.

With this setting it operates internally on exactly those sizes.

Operating internally on 80-bits is just the default setting, because it is the best for naive users, who are otherwise prone to computing erroneous results.

This is the same reason the C language made "double" the default precision for constants and intermediate values.

Unless you do graphics or ML/AI, single-precision computations are really only for experts who can analyze the algorithm and guarantee that it is correct.


I thought the point would be to get better, to stay competitive with rivals and free models.

Well actually, your not well actually to the well actually was actually a well actually of the well actually. Just sayin’.

I dunno. I wish, as a start, macOS would improve window management, so I don't keep jumping around the various desktops all the time and getting lost.


Going to orbit is actually useful already, cf. Starlink.


Yeah, I meant in addition to what we’re already doing.

I do think that will reach diminishing returns at some point. Kessler syndrome is a real thing for long-term higher orbits.


Biggest blocker: I can’t create reliable excel sheets that potential investors can look at in MS Excel, formulas tend to break.

If I can’t share the spreadsheet, it’s not very useful.


The joke is that if you fill in the center, it shows the Droste effect of the image and kind of diminishes the magic of it.


Wait, is the shooter an "insider" in this case?


How heavy is the micro black hole? How do you “drag” it?


Well, in a crunch I wouldn't like to be caught without a secondary backup.

