Hacker News | ChrisGreenHeur's comments

one of the more interesting things to think about is the big push to render all window manager stuff through a gpu, because we were sure we needed drop shadows and geometry transforms for windows....

Now, what we actually do in a window manager could easily be done in software in realtime, just farmed out to some cpu core.


> because we were sure we needed drop shadows and geometry transforms for windows

As screens get larger, the number of pixels you need to push to composite windows grows quadratically. It makes sense to move the pixel pushing away from the CPU and, more importantly, away from CPU RAM and onto a separate RAM bus.

The "single buffer with invalidation" model of Win16 (I cannot remember how it works in X) saves memory at the cost of more redraws. The composition model allows you to do things like drag window A over window B without forcing a repaint of window B every frame.

It also allows for better process isolation. I think in both Win16 and X11 you could just get a handle to the "root window" and draw wherever you wanted?
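The composition model the comments above describe can be sketched in code: each window owns a private buffer and the compositor re-blits them back-to-front, so dragging A over B never forces B to repaint. A minimal CPU-side sketch (the `Window` struct and flat RGBA buffers here are illustrative assumptions, not any real window manager's API):

```cpp
#include <cstdint>
#include <vector>

// Minimal CPU compositor sketch: each window keeps its own pixel buffer,
// so moving one window never forces another to repaint. The compositor
// just re-blits every buffer back-to-front into the screen buffer.
struct Window {
    int x, y, w, h;                // position and size on screen
    std::vector<uint32_t> pixels;  // the window's private back buffer (w*h)
};

void composite(std::vector<uint32_t>& screen, int sw, int sh,
               const std::vector<Window>& windows) {
    for (const Window& win : windows) {  // assumed back-to-front order
        for (int row = 0; row < win.h; ++row) {
            for (int col = 0; col < win.w; ++col) {
                int sx = win.x + col, sy = win.y + row;
                if (sx < 0 || sy < 0 || sx >= sw || sy >= sh) continue;
                screen[sy * sw + sx] = win.pixels[row * win.w + col];
            }
        }
    }
}
```

Under the Win16-style single-buffer model, by contrast, the occluded region of B is simply gone, and B must be asked to repaint it on every expose.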


> The "single buffer with invalidation" model of Win16 (I cannot remember how it works in X)

Same way; they both come from the Macintosh (which, if I remember the apocrypha correctly, was Bill Atkinson's idea, based on what he thought Xerox Smalltalk was doing, even though it turned out it didn't work like that).



eh, there is nothing a gpu can do here within the concept of composition that a cpu could not also do. the gpu simply has buffers that it composites; the cpu can do that as well, with the benefit of less complexity, meaning not needing to worry about driver crashes. on sane architectures it's all the same ram anyway

> eh, there is nothing a gpu can do here within the concept of composition that a cpu could not also do.

True, but which is more efficient?

> on sane architectures its all the same ram anyway

Opinions differ. The main benefit of splitting RAM is not having to share the bus. As I said, this lets you use the CPU for CPU things without having to spend precious DRAM bandwidth shovelling pixels.


if you can't get ai to handle git, that's certainly a skill issue

Presumably, randomness and only looking at a limited subset will semi-ensure that most contradictions surface over time. Alternatively, how large do you really expect this kind of thing to be? There is a limit to the number of facts from Warhammer 40k worth saving in a wiki.

The article is not about training LLMs; it is about using LLMs to write a wiki for personal use. It assumes a fully trained LLM such as ChatGPT or Claude already exists to be used.

Don't even try. After vibe coding, people seem to be adopting vibe thinking. "Model collapse sounds cool, I'm gonna use it without looking it up."

Vibe thinking... that's an interesting premise. I'll have to build up my new llm-wiki before I'll know what to think about "vibe thinking."

I was joking but also not joking; this llm-wiki idea is fun. I fed it its own llm-wiki.md, Foucault's Pendulum, randomly collected published papers about the philosophy of GiTS, several CCRU essays, and Manufacturing Consent. It drew fun red yarn between all of them, about the topic of red yarn (e.g. schizos drawing connections out of nothing, particularly through the use of computers, and how this relates to itself doing literally this as it does it).

I'll spare you most of the slop but.. "The Case That I Am Abulafia: The parallel is uncomfortable and precise. [...]"

Yeah... It's fun though.


Also, TFA prescribes putting ground truth source files into a /raw directory.

Everything is derived from them and backlinks into them, which is what lets you stay vigilant about staleness, correctness, drift, and more. Just like in a human-built knowledge base.


If you let in 100 and then throw out 90, you have thrown out many more than if you let in 10 but throw out 30. But the end result is better.

it could be possible that llms can make great use of them


> it could be possible that llms can make great use of them

This is actually a good point. Yes, LLMs have saturated the conversation everywhere, but contracts do help clarify the pre- and post-conditions of methods. I don't know how good the C++ implementation will be, but LLMs should be able to really exploit them.


The problem with that is that C++26 Contracts are just glorified asserts. They trigger at runtime, not compile time. So if your LLM-generated code would have worked 99% of the time and then crashed in the field... well, now it will work 99% of the time and (if you're lucky) call the contract-violation handler in the field.

Arguably that's better (more predictable misbehavior) than the status quo. But it's not remotely going to fix the problem with LLM-generated code, which is that you can't trust it to behave correctly in the corner cases. Contracts can't make the code magically behave better; all they can do is make it misbehave better.
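Since no shipping compiler implements the C++26 `pre`/`post`/`contract_assert` syntax yet, here is a sketch of the runtime behavior described above, emulated with an assert-style macro (the `PRE` macro and `nth_byte` function are made up for illustration): a violated precondition routes to a well-defined handler instead of marching on into undefined behavior.

```cpp
#include <cstdio>
#include <cstdlib>

// Emulation of a runtime contract check. A real C++26 contract would be
// spelled `int nth_byte(...) pre(buf != nullptr) pre(n >= 0 && n < len);`
// but the observable effect is the same: misbehave predictably.
static void contract_violation(const char* expr, const char* func) {
    std::fprintf(stderr, "contract violated in %s: %s\n", func, expr);
    std::abort();  // or log-and-continue, depending on the chosen semantic
}

#define PRE(cond) \
    do { if (!(cond)) contract_violation(#cond, __func__); } while (0)

// The corner case no longer reads past the buffer; it stops at the exact
// boundary where the caller's assumption broke.
int nth_byte(const unsigned char* buf, int len, int n) {
    PRE(buf != nullptr);
    PRE(n >= 0 && n < len);
    return buf[n];
}
```

That is exactly "misbehaving better": the bad call still happens in the field, but it surfaces as a named violation at the contract boundary rather than as corruption three frames later.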


In my experience, llms don't reason well about expected states, contracts, invariants, etc., partly because they don't have long-term memory and are often forced to reason about code in isolation. Maybe this means all invariants should go into AGENTS.md/CLAUDE.md files, or into doc strings, so a new human reader will quickly understand the assumptions.

Regardless, I think a habit of using contracts to make pre- and post-conditions explicit could help an AI reason about code.

Maybe instead of suggesting a patch to cover up a symptom, an AI may reason that a post-condition somewhere was violated, and will dig towards the root cause.

This applies just as well to asserts: contracts and asserts need to actually tell a reader something.
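A sketch of what an assert that "tells a reader something" might look like (the `normalize` function is a made-up example): each check states the assumption in words at the point where it matters, which is exactly the context a model reading the function in isolation would otherwise lack.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Scales a vector so its largest element becomes 1.0. The asserts double
// as documentation: they spell out the pre- and post-conditions in prose,
// so a reader (human or LLM) sees the invariants without chasing callers.
std::vector<double> normalize(std::vector<double> v) {
    assert(!v.empty() && "pre: cannot normalize an empty vector");
    double max = *std::max_element(v.begin(), v.end());
    assert(max > 0.0 && "pre: values must contain a positive maximum");
    for (double& x : v) x /= max;
    assert(*std::max_element(v.begin(), v.end()) == 1.0 &&
           "post: the largest element is exactly 1.0");
    return v;
}
```

An AI seeing the post-condition fire has a concrete place to start digging for the root cause, instead of patching whatever symptom appeared downstream.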


Quake 2 runs a bit iffy on an O2, but runs fine on a PC with a Voodoo 1


I wonder if it's optimized for it. Quick! Someone send Fabien an O2 before he finishes his Quake book!


I see the alternate reality like so:

SGI creates a low power cpu for Apple to use in portable devices, eventually in desktops and laptops (no Arm).

And either: SGI launches a low-budget PC with PlayStation 1-level 3D graphics as soon as it could compete with Win3.1/95, running Irix. Or: a few years after that, SGI launches what is essentially the Voodoo 2.

Any way you look at it, the only possible future for SGI was low-cost mass-market devices. It was just a matter of picking which one; they picked none.


Yes.. some interesting thoughts there, MIPS in my pocket: hell yeah.

The crazy thing is, SGI did have internal research projects to do such things .. they had engineers working on porting Netscape to the N64, which could very well have served as the basis for a more interesting consumer-end mass market device. Imagine if someone at SGI had put a cell modem in the mix somehow, yikes.

Well, it's all a dream. Meanwhile I still have all my SGI gear, and I'm not afraid to admit I've been looking at 3DFX Voodoo cards on eBay a little more than I should have today ..


>Yes.. some interesting thoughts there, MIPS in my pocket: hell yeah.

The PSP, and twice over: it had an r3k interpreter/loader for PSX games.

Also, you can call me crazy, but I played NetHack on the PSP with a CFW mod setting the clock from 222MHz down to 50MHz, making the battery last a few hours longer...

The GCWZero was a MIPS console too, and pcsx-rearmed had optimisations for it as well.


> The GCWZero was a MIPS console too

There have been a couple of GCWZero clones made in more recent years (e.g. from Anbernic) running the same (or a derivative) Linux-based OS with JZ4770 MIPS SoC and software compatibility. Too bad Ingenic never released any successor to the SoC though.


The closest would be the PSP with NetBSD and custom firmware with libre code. Same family in the end.


N64 was kinda that.


It's not possible to know something without believing it to be true. https://en.wikipedia.org/wiki/Belief#/media/File:Classical_d...


This is objectively wrong. If that were the case, every scientist performing a test would always have had their expectations and beliefs proven true. If you're trying to disprove something precisely because you believe it to be wrong, you would never be proven wrong.


re-read your post - it's just a bunch of nonsense, no actual reasoning in there


Science reduced to people with a PhD?


not a bad first order filter.

can you think of a better one?


The whole point of the scientific method was that we could ignore the source of the information, and were instead expected to focus on the value of the information based on supporting evidence (data).

If we go back to "Only people that have been inducted into the community can publish science" we're effectively saying that only the high priests can accrue knowledge.

I say this knowing full well that we have a massive problem in science with sorting the wheat from the chaff, have had for a VERY long time, and that AI is flooding the zone (thank you, political commentator I despise) with absolute dross.

