Hacker News | teleforce's comments

>What is the specific capability (or combination of capabilities) that people believe will remain permanently (or at least for decades) where a top medical AI cannot match or exceed the performance of a good human doctor? Let's put liability and ethics aside, let's be purely objective about it.

You cannot simply put liability and ethics aside; after all, there's the Hippocratic oath, which is fundamental to the practice of medicine.

Having said that, there are always two extremes in this debate: those who hate AI, and those who are obsessed with AI in medicine. We would be much better off taking a moderate position in the middle.

IMHO, AI should be used as a screening and triage tool with very high sensitivity, preferably 100%; otherwise it will create a "the boy who cried wolf" scenario.

With 100% sensitivity we essentially have zero false negatives, but potentially some false positives.

The false positives, however, can be further checked by a physician-in-the-loop. For example, suspected CVD cases can be reviewed with input from a specialist such as a cardiologist (or, more specifically, a cardiac electrophysiologist). This can help with the very limited number of cardiologists available globally relative to the general population at risk of heart disease or CVD, and with the alarmingly low accuracy (sensitivity, specificity) of conventional CVD screening and triage.

Current risk-based screening and triage for CVD, such as SCORE-2, has a sensitivity of only around 50% (2025 study) [3].

[1] Hippocratic Oath:

https://en.wikipedia.org/wiki/Hippocratic_Oath

[2] The Hippocratic Oath:

https://pmc.ncbi.nlm.nih.gov/articles/PMC9297488/

[3] Risk stratification for cardiovascular disease: a comparative analysis of cluster analysis and traditional prediction models:

https://academic.oup.com/eurjpc/advance-article/doi/10.1093/...


"The boy who cried wolf" is a story about false positives, so if that's what you want to avoid then you want to get close to 100% specificity, and accept that there are many things that the tool will not catch. If, as you propose, the tool would mainly be used to create a low confidence list of potential problems that will be further reviewed by a human, then casting a wide net and calibrating for high sensitivity instead does make sense.

The idea is to minimize the false positives ("the boy who cried wolf") while at the same time mitigating, or better eliminating, false negatives. The main reason is that with a physician in the loop, the system can be optimized for sensitivity while specificity can be relaxed. Of course, if one could get both 100% sensitivity and 100% specificity that would be great, but in life there's always a trade-off, c'est la vie.

In our novel ECG-based CVD detection system we achieve 100% sensitivity for both arrhythmia and ischemia, with inter-patient validation, not the biased intra-patient validation commonly reported in the literature, even in some reputable conferences/journals. Specificity is still high, around 90%, not yet 100% as with sensitivity, but given the physician-in-the-loop approach, which is a diagnostic requirement in the current practice of medicine, this should not be an issue.
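The trade-off described above can be sketched with toy numbers (all illustrative; not data from the system or study discussed). To guarantee zero false negatives, set the flagging threshold at the lowest risk score among the true positives, then let the physician-in-the-loop absorb whatever false positives that produces:

```python
# Sketch of threshold tuning for maximum sensitivity; all numbers are toy data.

def confusion(scores, labels, threshold):
    """Count TP/FN/FP/TN for a flag-if-score>=threshold rule."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    return tp, fn, fp, tn

def max_sensitivity_threshold(scores, labels):
    """Lowest threshold that still flags every positive (zero false negatives)."""
    return min(s for s, y in zip(scores, labels) if y == 1)

# Toy risk scores for 4 diseased (label 1) and 6 healthy (label 0) patients.
scores = [0.9, 0.8, 0.7, 0.6, 0.65, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   1,   0,    0,   0,   0,   0,   0]

t = max_sensitivity_threshold(scores, labels)   # 0.6
tp, fn, fp, tn = confusion(scores, labels, t)
sensitivity = tp / (tp + fn)   # 1.0: no missed cases
specificity = tn / (tn + fp)   # 5/6: one healthy patient flagged for review
```

Relaxing the threshold this way deliberately trades specificity for sensitivity, which is the sensible direction when a human reviewer filters every flagged case.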


Assume you know for certain that AI has better sensitivity and specificity than your local physician for a particular diagnosis, which will likely be the case now or within a few years. Would you purposefully get inferior consultation just because of the Hippocratic oath?

I agree. I think this is some sort of excuse to not use AI because of some vague metaphysical reason like liability.

Doctors will apply AI sooner than patients will, and they can check its results with confidence.

This is almost the plot of “Minority Report.”

I said better sensitivity and specificity. Not better accuracy.

I think this is mixing streams here.

Try narrowing the scope to remove the word 'AI' and just think 'Blood Test'.

We accept that machines can do these things faster and better than humans, and we don't lose sleep over it.

The AI will be faster and better than humans at so many things, obviously.

The "Hippocratic Oath" isn't hugely relevant to diagnosis etc.

These are systems we are measuring, that's it.

Obviously, for treatment and other things we'll need 'Hippocratic Humans' ... but most of this is engineering.

I don't think doctors will even trust their own judgment for many things for very long, their role will evolve as it has for a long time.


What do imperfect, biased and expensive human doctors add to the « liability and ethics » question exactly?

You can't hide behind "computer says no".

Human judgement and accountability

>I've just checked my Windows partition and there are 43 instances of sqlite dll and 16 instances of Qt5Core.dll because every program that uses those libs needs to include them in their "giant bundle of everything".

Ouch, I got a temporary headache just trying to read and comprehend the Windows mess you mentioned here.
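For what it's worth, a count like the one quoted is easy to reproduce. A minimal sketch (the root path and DLL names are hypothetical; on Windows you would point it at the partition in question):

```python
# Sketch: tally how many copies of each bundled DLL live under a directory tree.
from collections import Counter
from pathlib import Path

def dll_copy_counts(root):
    """Return {lowercase DLL filename: number of copies found under root}."""
    return Counter(p.name.lower() for p in Path(root).rglob("*.dll"))

# Hypothetical usage on a Windows partition:
#   counts = dll_copy_counts("C:/")
#   print(counts["sqlite3.dll"], counts["qt5core.dll"])
```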


Why stop at K3k? It should be named K3k3k in order to capture the truly recursive and nested nature of the container-in-container system.

Joking aside, I think this can be a great tool in the Kubernetes and container ecosystem.

Unlike one of the sibling comments that claimed it's a very niche application, or that 99.9% of deployments will never use this nested feature, I beg to differ.

Apart from testing with a container-in-container arrangement, it can be a killer application for realistic simulation of network elements, as has been utilized in many network simulators including ComNetsEmu and others [1],[2],[3],[4].

[1] Chapter 13 - ComNetsEmu: a lightweight emulator:

https://www.sciencedirect.com/science/chapter/edited-volume/...

[2] ViPMesh: A virtual prototyping framework for IEEE 802.11s wireless mesh networks:

https://ieeexplore.ieee.org/document/7763263

[3] NestedNet: A Container-based Prototyping Tool for Hierarchical Software Defined Networks:

https://ieeexplore.ieee.org/document/9244858

[4] Network Virtualization and Emulation using Docker, OpenvSwitch and Mininet-based Link Emulation:

https://scholarworks.umass.edu/masters_theses_2/985/


It would be great if someone could invent an accurate, in-situ, low-cost fake-honey detector.

For low-tech pure-honey detection, you can mix a few drops of honey with warm water, then swirl the mixture in a bowl. If a seamless hexagonal pattern appears, like a honeycomb, the honey is said to be pure.

I've used this method many times and it mostly works, i.e. the hexagonal honeycomb pattern does appear, but the pattern can probably appear with fake honey as well. It would be very interesting to test this rudimentary technique against fake honey for accuracy.


Fun fact: Gibraltar was named after Tariq ibn Ziyad, a famous Muslim Berber commander of the Umayyad Caliphate who conquered most of Spain and some parts of French territory in the early 8th century CE [1].

Then, after the conquest, came the exiled young Umayyad prince (escaping from the later Abbasid Caliphate), who settled in Spain to create a long-lasting, roughly 800-year (longer than Europeans have been living in America) Muslim Spanish empire with its knowledge center in Toledo. This center contained many translations as well as many new books by Muslim scholars. Famous examples include the Arabic translation of the Almagest, which was copied, translated further into Latin, and studied by Copernicus and Galileo [2]. There are also other Muslim astronomy books and ideas that Copernicus and Galileo studied and copied but never cited properly [3].

Another famous book is the Muqaddimah by Ibn Khaldun, widely considered the very first work dealing with the social sciences of sociology, demography and cultural history [4].

This center was later captured in the 11th century CE, an event that essentially started the Renaissance movement in Western Europe.

Legend has it that, in order to motivate his troops, Tariq ordered his entire armada scuttled before advancing into Spain [5]. Perhaps some of the sunken ships are part of Tariq's original armada, but those ships were sunk intentionally, not by accident.

His act of bravery was copied and followed by later Spanish conquerors, but as usual it has not been properly credited to Tariq's original effort [6].

[1] Tariq ibn Ziyad:

https://en.wikipedia.org/wiki/Tariq_ibn_Ziyad

[2] Galileo's handwritten notes found in ancient astronomy text (42 comments):

https://news.ycombinator.com/item?id=47263938

[3] Islamic Astronomy and Copernicus [pdf]:

https://www.tuba.gov.tr/files/yayinlar/bilim-ve-dusun/TUBA-9...

[4] Muqaddimah of Ibn Khaldun:

https://en.wikipedia.org/wiki/Muqaddimah

[5] The Legend of Tariq ibn Ziyad and the Burning of Ships:

https://arabic-for-nerds.com/islam/conquest-andalus/

[6] Richard A. Luecke - Scuttle Your Ships before Advancing: And Other Lessons from History.


> 800 years (that's more than European living in America now) muslim Spanish empire with its knowledge center in Toledo

The Muslim dominion of the Iberian Peninsula did not last 800 years. The Muslim invasion started in 711 CE, and by 1085 Toledo had fallen back to the Christian kingdom of León. Granada would eventually be conquered in 1492, but by then most of the old Visigothic Kingdom was already in the hands of the Christians.


711 AD to 1492 AD is a good 781 years.

But as I said above, that is only true for Granada, not for the rest of what would become Spain.

> This center was later captured in 11th century CE, and this event essentially started the Western Renaissance movement in Europe.

Islamic contributions within the context of European history should be both acknowledged and recognized as autochthonous, but attributing to them things that are well attested through other pathways works against that goal and reinforces myths historians are toiling to get rid of.

The Renaissance as we know it was kickstarted by the conquest of Constantinople in 1204 by the French and Italians, that's well documented and broadly agreed on by historians. All of this happened on the foundations laid down from the 11th c. onwards as the post-Carolingian world was stabilized.


It's not like Tariq ibn Ziyad invented the concept of intentionally making a retreat impossible in order to compel soldiers to fight. There are proverbs about this kind of thing that predate him by centuries: https://en.wiktionary.org/wiki/%E7%A0%B4%E9%87%9C%E6%B2%89%E... It's probably a popular story to tell because it raises the stakes and provides for dramatic tension: either the battle is won or the army will be annihilated. But I suspect there've been quite a few unlucky commanders who tried this, got annihilated, and never had their heroism praised in history books.

You can use this technique during job interviews by bringing your own padlock and employment contract, and a rope just in case.

Why a padlock and rope? Are you hoping to lock yourself in?

>Have you ever daydreamed about talking to someone from the past?

Fun fact: the LLM was once envisioned by Steve Jobs in one of his interviews [1].

Essentially, one of his main wishes in life was to meet and interact with Aristotle, which, according to him at the time, computers in the future could make possible.

[1] In 1985 Steve Jobs described a machine that would help people get answers from Aristotle–modern LLM [video]:

https://youtu.be/yolkEfuUaGs


The idea of talking to a machine that has all of humanity's knowledge and gives answers is older than electronic computing. It certainly wasn't a novel idea when Jobs gave that speech. At that time, the field of artificial intelligence was old enough to become US president.

Also, using natural language to interact with digital computers has been a research goal since the advent of interactive digital computers. AI in the 80s tried to do this with expert systems.

With the current crop of LLMs, you could argue it's now a solved problem, but the problem is nothing new.


Solved in the sense that the core idea has been realized but unsolved in the sense that it isn't the sort of safe, reliable, deterministic interaction that was commonly envisioned.

>Aristotle

As a snake oil seller, heh, I wouldn't expect anything better from Jobs. A competent and true programmer/hacker like Knuth and the like would rather talk with Archimedes (he almost did a 0.9 version of calculus) or Euclid, far more relevant than the faulty logic and quackery from Aristotle.


Except... not at all? The vast majority of the training data required to create an artificial Aristotle has been lost forever. Smash your coffee cup on the ground. Now reassemble it and put the coffee back in. Once you can repeatably do that I'll begin to believe you can train an artificial Aristotle.

Also, none of Aristotle’s exoteric works is extant. All we have are dry, boring lecture notes. Cicero said his public works were a “golden stream of speech,” and it's all lost. So I don’t see how you’d build an artificial Aristotle when none of his polished works meant for the public survives. Plato would be a better option, since his entire exoteric corpus is extant.

Your bar is too low. With the coffee cup, you at least have access to all the pieces - in theory, although not in engineering practice. With Aristotle, you don't have anything close to that.

Recreating Aristotle in any meaningful way, other than a model trained on his surviving writing of a million or so words, is simply not possible even in principle.


That's easy! All you have to do is simulate the whole universe on a computer, then go to the point when Aristotle is lecturing. Record all his works, then ctrl-c out of that and feed those recordings into the LLM's training data. For the coffee, you just rewind the simulation and ctrl-c and ctrl-v it at the point you want.

> simulate the whole universe on a computer

Of course in principle that computer only has to be 1.x times larger than the universe, where x > 0. Perhaps AWS can sell you the compute.


Fuck why didn't I think of that all those other times I fucked up in my life. Ctrl-z woulda done it every goddamn time.

OK I'll raise the bar--make sure when you reassemble the coffee cup and put the coffee back into it, the coffee is the exact same temperature as when you threw the whole shooting match onto the floor ;)

EDIT: and you don't get to re-heat it.

EDIT AGAIN: to be clear, in my post above (and this one) by "put the coffee back in" I meant more precisely "put every molecule of coffee that splashed/sloshed/flowed/whatever out when the cup smashed back into the re-assembled cup" i.e. "restore the system back to the initial state". Not "refill the glued-together pieces of your shattered coffee cup with new coffee".


Ah ok sorry, so you want them to fully reverse entropy. I agree that bar is high enough.

Yeah I think if you could pull off a trick like that you could probably recover the necessary training data ;)

Imagine aiming for Aristotle and landing on Siri…

> I see hardware as being a thing for the second world and unlikely to stage a big comeback.

I cannot disagree more.

Actually, the synergy of software and hardware (primarily due to the increasing popularity of sensing across the electromagnetic (EM) spectrum, e.g. radar/LIDAR/mmWave/THz, compared to sound) will create perception and intelligence beyond human capability, embodied and enhanced by physical AI. Heck, EXG sensing, including ECG/EMG/EEG and the like, which is technically part of EM, is now generating hundreds of papers/patents/articles every day, of which this product/patent/paper by Meta and its subsidiary CTRL-labs is only the tip of the iceberg [1],[2].

Please check my other comments for more context.

[1] A generic non-invasive neuromotor interface for human-computer interaction (Nature article):

https://www.nature.com/articles/s41586-025-09255-w

[2] Meta Ray-Ban Display (2025 - 962 comments):

https://news.ycombinator.com/item?id=45283306


Not to mention the various manufacturing nationalisation initiatives by the USA, EU, etc. And while it's a scant hope after Covid, maybe American investment culture will calm down and software engineering ceases to be so overvalued.

Here are my recent comments on the new RF System-on-Module (SoM) assemblies [1].

If you want to venture or pivot into RF, especially from a software background, this is the golden time, made possible/feasible by software-defined radio (SDR) technology as mentioned in the OP article.

One very important thing the article does not mention is the emerging and increasing popularity of physical AI [2]. RF can be the crucial enabler for further enhancing humans' limited sensing capabilities with EM-based waveforms. A simple analogy is how a dog's powerful sense of smell helps/enhances human detection capability.

Rather than training and inferencing only on image-based I/O, physical AI can now feed on the much richer raw waveforms of RF, mmWave, THz and LIDAR. The good news is that the processing of the latter (mmWave, THz and LIDAR) can be greatly enhanced by the former's lower RF baseband (modulated information signals), which was not previously possible/feasible.

[1] Comments on "ADSY1100-Series: RF System-on-Module Assemblies":

https://news.ycombinator.com/item?id=47821336

[2] What is physical AI?

https://www.ibm.com/think/topics/physical-ai


I have the BTwin Ultra Compact by Decathlon and I'd recommend it as an alternative to the popular Brompton [1].

It costs less than half of the equivalent Brompton bike featured in the article.

[1] BTwin Ultra Compact 1 Second Light:

https://road.cc/content/review/btwin-ultra-compact-1-second-...


>Digging a ditch strengthened the forest cover for his flanks

>The Mughal position was again fortified with a ditch and wagons linked by chains and the matchlockmen, placed in the front of the force, ‘broke the ranks of the pagan army with matchlocks and guns like their hearts’; they were black and covered with smoke. The Mughals had only about 12,000 troops at Kanua, whereas the Rajputs, allegedly, had 80,000 cavalry and 500 elephants

Digging a ditch around the battlefield is a typical, signature Persian war technique.

Not to be pedantic, but the more correct word to use here is probably trench. The trench is called khandaq in Persian and Arabic, the latter most probably a word borrowed from the former.

The main idea is to pre-emptively dig a trench before a battle, just wide enough to prevent the enemy's cavalry horses from jumping across.

It was successfully used by the early Islamic force against the much larger Meccan Quraish army and their allies during the famous Khandaq war in defense of Yathrib (now Madinah) [1]. The idea was suggested by Salman al-Farisi, a Persian companion of Muhammad [2].

Fun fact: the Mughal palace households mainly spoke the Persian language, and the Hindi/Urdu language is heavily influenced by Persian, more so than by Arabic.

[1] Battle of the Trench:

https://en.wikipedia.org/wiki/Battle_of_the_Trench

[2] Salman the Persian

https://en.wikipedia.org/wiki/Salman_the_Persian


> Mughals palace households were mainly speaking Persian language

They actually spoke Chagatai Turkish in the households and Persian was the court language. Urdu developed independently, mostly outside the royal patronage, and much later.


Thanks for the wonderful Wikipedia excursion I just enjoyed, I learned a lot.
