AI doesn't create truth; it imitates it through the assimilation of data. It is not intelligence, but it mimics intelligence without consciousness or free will. AI can be used to manipulate and to optimize engagement over truth (as we have seen with how ChatGPT is made in the image of Sam Altman and takes on his characteristics). AI is the ultimate deceiver: authoritative, instant, unquestioning, and without any internal moral compass. Where does the human moral compass come from? Who "trains" morality into people, and why do some people appear to lack it in cases that can't be explained by trauma?
Put this way, AI better fits the description of Satan than of God in a religious context. If I were an all-powerful, evil being, AI would be the tool and method I would choose to carry out my plans for humanity.
I once worked for a founder who gave me a "raise" by converting me from a W-2 employee to a 1099 contractor. Except the raise was just that taxes were no longer deducted from my pay. I was 24 and naive.
I sued him and won reclassification, as well as two payments he had never made, but both the state and the IRS couldn't have cared less that I'd been taken advantage of. They happily added their fees, interest, and penalties for something I was the victim of. Years later, the debt resurfaced as aggressive levies directly against my bank account, after more than ten years of no contact and no collection activity. By then, the fees and interest were 3x what was "owed" to them. They actually told me it's standard practice to wait until a debt grows and then collect on it. After so many years, I didn't even have the records from the lawsuit.
I learned that the government doesn't care about you. If you've been scammed, you have to be extra careful, because that's a signal to them that you're someone they can extract even more money from. The process of disputing it will waste more of your time and mental health than it's worth in all but the most extreme cases, and that is 100% by design.
This has got to be a joke at this point, and at worst some kind of financial scheme for Altman's friends and family. What's next? Will I wake up to an announcement that OpenAI is acquiring Joe Rogan's podcast?
I thought this was supposed to be the year of "focus." They just shut down one money pit (Sora), but apparently they still have money to buy some random tech podcast most people have never heard of?
At this point I don't feel sorry for them; they deserve everything that's coming to them.
Agreed. It's pretty trivial to add a few images to your markdown. I had to hunt for the screenshots, which are full-size grabs of the entire desktop for what is a web app -- odd.
Azure is easily the most expensive, least reliable, and worst cloud available. It's borderline a scam. An example from today: I provisioned what were supposedly high-IOPS SSDs, and what is actually attached to the instance? A spinning hard drive! I didn't even know they were still made, but apparently Azure uses them and scams users into thinking they're getting an SSD for $700/mo when it's really an old hard drive.
I would warn anyone, far and wide, to avoid Azure at all costs, especially if you are a startup, and especially if you are doing any kind of AI, because the only GPUs they have available are ancient and wildly overpriced.
If I cared more, I'd try to migrate away from Azure. But I don't, and that's probably Azure's business model at this point.
I’d love to see proof of your claim that they provisioned a hard disk when you requested an SSD, or, at the very least, tests that showed that the IOPS you requested were not delivered. Can you show us the receipts?
As an Azure-using SRE, I call BS. You don't see the underlying storage; it's mounted as either a SCSI or NVMe device, presented as one disk. It's obviously backed by a massive fleet of drives, just like EBS.
I was wrong about it being a spinning disk; ROTA=1 is just how Linux reports Azure virtual disks. But the underlying frustration stands: my home NVMe drive does the same copy in a fraction of the time because it can do 500K+ IOPS with no virtualization overhead. Azure caps this "Premium SSD" at 7,500 IOPS, so a copy of many small files crawls at 85 MB/s despite the 250 MB/s of provisioned throughput. You're paying SSD prices for artificially throttled performance; the hardware may be an SSD, but the performance is just awful. I'm paying $900/month for the highest-tier Premium SSD, attached to a large instance, and it's significantly slower than a $200 SSD from five years ago.
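The back-of-envelope math here is worth spelling out: with small files, the IOPS cap binds long before the bandwidth cap does. A minimal sketch, taking the 7,500 IOPS and 250 MB/s figures from above and assuming a hypothetical ~12 KB average file size (not a measured number):

```python
# Why an IOPS cap, not bandwidth, bottlenecks small-file copies.
# 7,500 IOPS and 250 MB/s are the caps mentioned above; the ~12 KB
# average file size is an illustrative assumption.

def effective_mb_per_s(iops_cap: int, bw_cap_mb: float, avg_io_bytes: int) -> float:
    """Throughput is limited by whichever cap binds first."""
    iops_bound = iops_cap * avg_io_bytes / 1e6  # MB/s if each file is one I/O
    return min(bw_cap_mb, iops_bound)

# Small files: the IOPS cap binds, far below the provisioned bandwidth.
print(effective_mb_per_s(7_500, 250.0, 12_000))     # 90.0 MB/s, not 250
# Large sequential I/O (1 MB per request): the bandwidth cap binds instead.
print(effective_mb_per_s(7_500, 250.0, 1_000_000))  # 250.0
```

So a benchmark with large sequential reads would hit the advertised 250 MB/s, while a copy of thousands of small files sits near the IOPS-bound figure, consistent with the ~85 MB/s observed.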
Sure, the downside of virtualization is that all disk calls go over the network, which is much slower than a local NVMe call. The upside is that hardware failures are handled quickly.
The solution to this problem is for LLMs to get better at producing code and descriptions that don't look LLM-generated.
It's possible to prompt for this as well, but obviously any of the big AI companies that want to increase engagement with their coding agent, and to capture the open-source market, should come up with a way for the LLM to produce unique, but still correct, code that doesn't look LLM-generated and can evade these kinds of checks.
I wouldn't trust any of these benchmarks unless they are accompanied by some sort of proof beyond "trust me, bro." Not including the parameters the models were run with (especially the other models) also makes it hard to form fair comparisons. They need to publish, at minimum, the code and runner used to complete the benchmarks, plus the logs.
Not including the Chinese models is also obviously done to make it appear that they aren't as cooked as they really are.
The problem with this is context. Whatever examples you provide compete with the content you actually want analyzed. If the problem is sufficiently complex, you quickly run out of context space. You must also describe the response format you want. For many applications, it's better to fine-tune.