>What is the specific capability (or combination of capabilities) that people believe will remain permanently (or at least for decades) where a top medical AI cannot match or exceed the performance of a good human doctor? Let's put liability and ethics aside, let's be purely objective about it.
You cannot simply put liability and ethics aside; after all, there's the Hippocratic oath, which is fundamental to how physicians practice [1],[2].
Having said that, there are always two extremes in this debate: those who hate AI in medicine and those who are obsessed with it. We are much better off in the middle, i.e. moderate on this issue.
IMHO, AI should be used as a screening and triage tool with very high sensitivity, preferably 100%, otherwise it will create a "boy who cried wolf" scenario.
With 100% sensitivity we essentially have zero false negatives, but potentially some false positives.
The false positives, however, can be further checked by a physician-in-the-loop: for example, they can look into a CVD case with input from a specialist such as a cardiologist (or, more specifically, a cardiac electrophysiologist). This can help with the very limited number of cardiologists available globally compared to the general population with potential heart disease or CVDs, and with the alarmingly low accuracy (sensitivity, specificity) of conventional CVD screening and triage.
Current risk-based screening and triage for CVD, such as SCORE2, has a sensitivity of only around 50% (2025 study) [3].
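To make the sensitivity/specificity trade-off concrete, here is a minimal sketch (hypothetical scores and labels, plain NumPy, not the actual system) of tuning a screening threshold so that sensitivity is 100% on a validation set and specificity is whatever remains:

    import numpy as np

    # Hypothetical validation data: model risk scores and true labels (1 = disease present).
    scores = np.array([0.95, 0.90, 0.82, 0.75, 0.40, 0.35, 0.20, 0.15, 0.10, 0.05])
    labels = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])

    # Pick the highest threshold that still flags every true positive,
    # i.e. zero false negatives -> 100% sensitivity on this set.
    threshold = scores[labels == 1].min()
    pred = scores >= threshold

    tp = np.sum(pred & (labels == 1))
    fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0))

    sensitivity = tp / (tp + fn)  # 1.0 by construction here
    specificity = tn / (tn + fp)  # whatever is left after casting the wide net
    print(f"threshold={threshold:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")

The threshold is set from the positives, so 100% sensitivity is only guaranteed on the data it was tuned on; on new patients it can still slip, which is exactly where the physician-in-the-loop comes in.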
"The boy who cried wolf" is a story about false positives, so if that's what you want to avoid then you want to get close to 100% specificity, and accept that there are many things that the tool will not catch. If, as you propose, the tool would mainly be used to create a low confidence list of potential problems that will be further reviewed by a human, then casting a wide net and calibrating for high sensitivity instead does make sense.
The idea is to minimize the false positives ("the boy who cried wolf") while at the same time mitigating, or better yet eliminating, the false negatives. The main reason is that with a physician in the loop, the system can be optimized for sensitivity while specificity can be relaxed. Of course, if we could get both 100% sensitivity and 100% specificity that would be great, but in life there's always a trade-off, c'est la vie.
In our novel ECG-based CVD detection system we can get 100% sensitivity for both arrhythmia and ischemia with inter-patient validation, not the biased intra-patient validation commonly reported in the literature, even in some reputable conferences/journals. Specificity is still high, around 90%, not yet 100% like the sensitivity, but given the physician-in-the-loop approach, which is a diagnostic requirement in the current practice of medicine, this should not be an issue.
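For readers unfamiliar with the distinction: inter-patient validation means every record from a given patient lands entirely in either the training or the test split, never both. A minimal sketch of that protocol with scikit-learn's GroupKFold, on made-up data (this only illustrates the split, not the system described above):

    import numpy as np
    from sklearn.model_selection import GroupKFold
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score

    # Made-up stand-ins: X = ECG-derived features, y = disease labels,
    # patient_ids = which patient each record came from.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 16))
    y = rng.integers(0, 2, size=1000)
    patient_ids = rng.integers(0, 50, size=1000)

    # GroupKFold guarantees no patient appears in both the train and test folds,
    # so the reported sensitivity is not inflated by per-patient leakage
    # (the usual flaw of intra-patient splits).
    for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=patient_ids):
        model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        sens = recall_score(y[test_idx], model.predict(X[test_idx]))  # sensitivity
        print(f"fold sensitivity: {sens:.2f}")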
Assume you know for certain that AI has better sensitivity and specificity than your local physician for a particular diagnosis, which will likely be the case now or in a few years. Would you purposefully get an inferior consultation just because of the Hippocratic oath?
>I've just checked my Windows partition and there are 43 instances of sqlite dll and 16 instances of Qt5Core.dll because every program that uses those libs needs to include them in their "giant bundle of everything".
Ouch, I got a temporary headache just trying to read and comprehend the Windows mess you mentioned here.
Why stop at K3k? It should be named K3k3k to capture the truly recursive and nested nature of the container-in-container system.
Joking aside, I think this can be a great tool in the Kubernetes and container ecosystem.
One of the sibling comments claimed it's a very niche application, or that 99.9% of deployments will never use this nested feature; I beg to differ.
Apart from testing container-in-container arrangements, it can be a killer application for realistic simulation of network elements, as has been done in many network emulators including ComNetsEmu and others [1],[2],[3],[4].
[1] Chapter 13 - ComNetsEmu: a lightweight emulator:
It'd be great if someone could invent an accurate, in-situ, low-cost fake-honey detector.
For low-tech pure-honey detection, you can mix a few drops of honey with warm water and then swirl the mixture in a bowl. If a seamless hexagonal pattern appears, like a honeycomb, the honey is said to be pure.
I've used this method many times and it mostly works, i.e. the hexagonal honeycomb pattern does appear, but the pattern can probably appear with fake honey as well. It would be very interesting to test this rudimentary technique against fake honey for accuracy.
Fun fact: Gibraltar was named after Tariq ibn Ziyad, a famous Muslim Berber commander of the Umayyad Caliphate who conquered most of Spain and part of French territory in the early 8th century CE [1].
Then, after the conquest, came an exiled young Umayyad prince (escaping from the later Abbasid Caliphate), who settled in Spain to create a long-lasting, roughly 800-year (longer than Europeans have lived in America) Muslim Spanish empire with its knowledge center in Toledo. This center held many translated books as well as many new books by Muslim scholars. Famous examples include the Arabic translation of the Almagest, which was copied, translated further into Latin, and studied by Copernicus and Galileo [2]. Of course, there are other Muslim astronomy books and ideas that Copernicus and Galileo studied and copied but never cited properly [3].
Another famous book is the Muqaddimah by Ibn Khaldun, widely considered the very first work dealing with the social sciences of sociology, demography and cultural history [4].
This center was later captured in the 11th century CE, and this event essentially started the Western Renaissance movement in Europe.
Legend has it that, in order to motivate his troops, Tariq ordered his entire armada of ships scuttled before advancing into Spain [5]. Perhaps some of the sunken ships are part of Tariq's original armada, but those ships were sunk intentionally, not by accident.
His act of bravery was copied and followed by later Spanish conquerors, but as usual it has not been properly credited to Tariq's original effort [6].
> roughly 800-year (longer than Europeans have lived in America) Muslim Spanish empire with its knowledge center in Toledo
The Muslim dominion of the Iberian Peninsula did not last 800 years. The Muslim invasion started in 711 CE, and by 1085 Toledo had fallen back to the Christian kingdom of León. Granada would eventually be conquered in 1492, but most of the old Visigothic Kingdom was already in the hands of the Christians.
> This center was later captured in the 11th century CE, and this event essentially started the Western Renaissance movement in Europe.
The Islamic contribution within the context of European history should be both acknowledged and recognized as being autochthonous, but attributing to it things that are well attested through other pathways works against it and reinforces myths historians are toiling to get rid of.
The Renaissance as we know it was kickstarted by the conquest of Constantinople in 1204 by the French and Italians; that's well documented and broadly agreed on by historians. All of this happened on the foundations laid down from the 11th c. onwards as the post-Carolingian world stabilized.
It's not like Tariq ibn Ziyad invented the concept of intentionally making a retreat impossible in order to compel soldiers to fight. There are proverbs about this kind of thing that predate him by centuries: https://en.wiktionary.org/wiki/%E7%A0%B4%E9%87%9C%E6%B2%89%E... It's probably a popular story to tell because it raises the stakes and provides for dramatic tension: either the battle is won or the army will be annihilated. But I suspect there've been quite a few unlucky commanders who tried this, got annihilated, and never had their heroism praised in history books.
>Have you ever daydreamed about talking to someone from the past?
Fun fact: the LLM was once envisioned by Steve Jobs in one of his interviews [1].
Essentially, one of his main wishes in life was to meet and interact with Aristotle, which, according to him at the time, computers in the future could make possible.
[1] In 1985 Steve Jobs described a machine that would help people get answers from Aristotle–modern LLM [video]:
The idea of talking to a machine that has all of humanity's knowledge and gives answers is older than electronic computing. It certainly wasn't a novel idea when Jobs gave that speech. At that time, the field of artificial intelligence was old enough to become US president.
Also, using natural language to interact with digital computers has been a research goal since the advent of interactive digital computers. AI in the 80s tried to do this with expert systems.
With the current crop of LLMs, you could argue it's now a solved problem, but the problem is nothing new.
Solved in the sense that the core idea has been realized but unsolved in the sense that it isn't the sort of safe, reliable, deterministic interaction that was commonly envisioned.
As a snake-oil seller, heh, I wouldn't expect anything better from Jobs. A competent and true programmer/hacker like Knuth would just want to talk with Archimedes (he almost did a 0.9 version of calculus) or Euclid, far more relevant than the faulty logic and the quackery about the elements from Aristotle.
Except... not at all? The vast majority of the training data required to create an artificial Aristotle has been lost forever. Smash your coffee cup on the ground. Now reassemble it and put the coffee back in. Once you can repeatably do that I'll begin to believe you can train an artificial Aristotle.
Also, none of Aristotle's exoteric works is extant. All we have are dry, boring lecture notes. Cicero said his public works were a "golden stream of speech", and it's all lost. So I don't see how you'd build an artificial Aristotle when none of his polished works meant for the public survives. Plato would be a better option, since his entire exoteric corpus is extant.
Your bar is too low. With the coffee cup, you at least have access to all the pieces - in theory, although not in engineering practice. With Aristotle, you don't have anything close to that.
Recreating Aristotle in any meaningful way, other than a model trained on his surviving writing of a million or so words, is simply not possible even in principle.
That's easy! All you have to do is simulate the whole universe on a computer, and then go to the point when Aristotle is lecturing. Record all his works, then ctrl-c out of that and feed those recordings into the LLM's training data. For the coffee, you just rewind the simulation and ctrl-c and ctrl-v it at the point you want.
OK, I'll raise the bar: make sure that when you reassemble the coffee cup and put the coffee back into it, the coffee is the exact same temperature as when you threw the whole shooting match onto the floor ;)
EDIT: and you don't get to re-heat it.
EDIT AGAIN: to be clear, in my post above (and this one) by "put the coffee back in" I meant more precisely "put every molecule of coffee that splashed/sloshed/flowed/whatever out when the cup smashed back into the re-assembled cup" i.e. "restore the system back to the initial state". Not "refill the glued-together pieces of your shattered coffee cup with new coffee".
> I see hardware as being a thing for the second world and unlikely to stage a big comeback.
I cannot disagree more.
Actually, the synergy of software and hardware (primarily due to the increasing popularity of electromagnetic (EM) spectrum sensing like radar/LIDAR/mmWave/THz/etc. compared to sound) will create unprecedented perception and intelligence beyond the human level, embodied and enhanced by physical AI. Heck, EXG sensing (ECG/EMG/EEG/etc.), which is technically part of EM, is now generating hundreds of papers/patents/articles every day, of which this product/patent/paper by Meta and its subsidiary CTRL-labs is only the tip of the iceberg [1],[2].
Please check my other comments for more context.
[1] A generic non-invasive neuromotor interface for human-computer interaction (Nature article):
Not to mention the various manufacturing nationalisation initiatives by the USA, EU, etc. And while it's a scant hope after Covid, maybe American investment culture will calm down and software engineering will cease to be so overvalued.
These are my recent comments on the new RF System-on-Module (SoM) assemblies [1].
If you want to venture or pivot into RF, especially from a software background, this is the golden time, made possible/feasible by software-defined radio (SDR) technology as mentioned in the OP article.
One very important thing the article did not mention is the emerging and increasing popularity of physical AI [2]. RF can be the crucial enabler to further extend humans' limited sensing capabilities with EM-based waveforms. A simple analogy is how a dog's powerful sense of smell helps/enhances human detection capability.
Rather than just training and inferencing on image-based I/O, physical AI can now feed on the much richer raw waveforms from RF, mmWave, THz and LIDAR. The good news is that processing of the latter (mmWave, THz and LIDAR) can be greatly enhanced by the former, the lower RF baseband (modulated information signals), in ways that were not previously possible/feasible.
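As a rough, hypothetical sketch (plain NumPy, simulated IQ samples rather than a real SDR capture) of what "feeding on raw waveforms" often looks like in practice: complex baseband samples are turned into a time-frequency representation that a model can ingest much like an image:

    import numpy as np

    # Simulated raw IQ capture: a 50 kHz tone in noise at 1 MS/s (stand-in for SDR output).
    fs = 1_000_000
    t = np.arange(200_000) / fs
    iq = np.exp(2j * np.pi * 50_000 * t) + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

    # Short-time FFT -> spectrogram: a 2D (time, frequency) tensor a physical-AI model
    # can consume, analogous to a camera frame but from the RF domain.
    nfft, hop = 1024, 512
    frames = np.lib.stride_tricks.sliding_window_view(iq, nfft)[::hop]
    spec = np.abs(np.fft.fftshift(np.fft.fft(frames * np.hanning(nfft), axis=1), axes=1)) ** 2
    spec_db = 10 * np.log10(spec + 1e-12)
    print(spec_db.shape)  # (num_time_frames, nfft)

The same kind of front-end applies whether the downstream model is a CNN, a transformer, or something fused with camera/LIDAR streams.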
[1] Comments on "ADSY1100-Series: RF System-on-Module Assemblies":
>Digging a ditch strengthened the forest cover for his flanks
>The Mughal position was again fortified with a ditch and wagons linked by chains and the matchlockmen, placed in the front of the force, ‘broke the ranks of the pagan army with matchlocks and guns like their hearts’; they were black and covered with smoke. The Mughals had only about 12,000 troops at Kanua, whereas the Rajputs, allegedly, had 80,000 cavalry and 500 elephants
Digging a ditch for battle is a typical and signature Persian war technique.
Not to be pedantic, but the more correct word to use here is probably "trench". The trench is called khandaq in Persian and Arabic, the Arabic word most probably borrowed from the Persian.
The main idea is to pre-emptively dig a trench before a battle, just wide enough to prevent the enemy's cavalry horses from jumping across.
It was successfully used by the early Islamic force against the much larger Meccan Quraysh army and their allies during the famous Battle of the Khandaq, in defense of Yathrib (now Madinah) [1]. The idea was suggested by Salman al-Farisi, a Persian companion of Muhammad [2].
Fun fact: the Mughal palace households mainly spoke the Persian language, and Hindi/Urdu is heavily influenced by Persian, more so than by Arabic.
> the Mughal palace households mainly spoke the Persian language
They actually spoke Chagatai Turkish in the households and Persian was the court language. Urdu developed independently, mostly outside the royal patronage, and much later.
[1] Hippocratic Oath:
https://en.wikipedia.org/wiki/Hippocratic_Oath
[2] The Hippocratic Oath:
https://pmc.ncbi.nlm.nih.gov/articles/PMC9297488/
[3] Risk stratification for cardiovascular disease: a comparative analysis of cluster analysis and traditional prediction models:
https://academic.oup.com/eurjpc/advance-article/doi/10.1093/...