The only way you get the capital to train and host models at this scale is if you have a story to sell to investors that ends with "and then we become an economic chokepoint and extract rents from everyone else".
I agree that this dream of huge returns is luring investors.
I don't think that it will actually work that way. The barriers to making a useful model appear to be modest and keep getting lower. There are a lot of tasks where some AI is useful, but you don't need the very best model if there's a "good enough" solution available at lower prices.
I believe that the irrational exuberance of AI investors is effectively subsidizing technological R&D in this area before AI company valuations drop to realistic levels. Even if OpenAI ends up being analogous to Yahoo! (a currently non-sexy company that was once a darling of investors), their former researchers and engineers can circulate whatever they learned on the job to the organizations that they join later.
I think that we're getting there. I put together a workstation in early 2023 with a single 4090 GPU. I did it to run things like BERT text models and YOLO image classifiers. At that point the only "open weights" LLM was the original Llama from Meta, and even that was open-weights only because it was leaked. It was a very weak model by today's standards.
With the same hardware I now get genuine utility out of models like Qwen 3.5 for categorizing and extracting unstructured data sources. I don't use small local models for coding since frontier models are so much stronger, but if I had to go back to small models for coding too they would be more useful than anything commercially available as recently as 4 years ago.
In short, the ML industry is creating the conditions under which anyone with sufficient funds can train an unaligned model. Rather than raise the bar against malicious AI, ML companies have lowered it.
This is true, and I believe that the "sufficient funds" threshold will keep dropping too. It's a relief more than a concern, because I don't trust that big models from American or Chinese labs will always be aligned with what I need. There are probably a lot of people in the world whose interests are not especially aligned with the interests of the current AI research leaders.
"Don't turn the visible universe into paperclips" is a practically universal "good alignment" but the models we have can't do that anyhow. The actual refusal-guards that frontier models come with are a lot more culturally/historically contingent and less universal. Lumping them all under "safety" presupposes the outcome of a debate that has been philosophically unresolved forever. If we get hundreds of strong models from different groups all over the world, I think that it will improve the net utility of AI and disarm the possibility of one lab or a small cartel using it to control the rest of us.
I mean, that does partially reduce the chances of a cartel, but not nearly as much as you think.
Most countries have a pretty strong ban on most kinds of weapons. The US is one of the few that lets everyone run around with their rooty tooty point and shooty, but most countries have implemented bans: some because the government doesn't want the people having them, and others because the citizens call for the bans, since they don't like the idea of getting shot by their fellow citizens.
It won't be long before citizens and governments get tired of models being used for criminal activities and eventually lay down laws around this. Models will have to be registered and safety tested, with strict criminal prosecution if you don't comply. And the big model companies will back their favorite politicians to ensure this happens too.
Now, that in general will be helpful, as there will still be more models, but it will still not be a free-for-all.
The argument is that it's misaligned because it only values one thing: more paperclips, while human values are much more varied and complex.
Debatable whether it truly understands what it's doing or not, but the argument usually assumes that it does know what it's doing at least in that it's able to imagine outcomes and create plans to reach its singular goal, making it a very simple toy example of a misaligned system.
At least one of the test questions was just a screenshot from a tweet. It was difficult to read. I'd suggest extracting text from screenshots with OCR. Apple has built-in functionality for this on their operating systems with Live Text. There are strong open source systems based on small vision language models for this, too. The one I have been recommending lately is GLM-OCR:
It's fast and can run even on low-resource computers.
---
Does this CAPTCHA actually resist computers? I didn't try feeding the questions I got to an LLM, but my sense is that current frontier models could probably pass all of these too. Making generated text pass the pangram test is simple enough for someone actually writing a bot to spin up automated accounts.
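To make the point concrete: the pangram check itself is trivially automatable. This is a minimal sketch of what a bot author would write, assuming the test only requires that every letter of the alphabet appear at least once.

```python
import string

def is_pangram(text: str) -> bool:
    """Return True if text contains every letter a-z at least once."""
    letters = {c for c in text.lower() if c in string.ascii_lowercase}
    return len(letters) == 26

# A bot could simply ask an LLM to regenerate until this passes.
print(is_pangram("The quick brown fox jumps over the lazy dog"))  # True
print(is_pangram("Hello world"))                                  # False
```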
It also started importing liquefied natural gas in 2023.
But it has abundant sunlight, access to low cost Chinese solar panels that will produce electricity for decades instead of being burned once, and a substantial domestic photovoltaic manufacturing industry of its own.
"Renewable Energy Investments in Vietnam in 2024 – Asia’s Next Clean Energy Powerhouse" (June 2024)
In 2014, the share of renewable energy in Vietnam was just 0.32%. In 2015, only 4 megawatts (MW) of installed solar capacity for power generation was available. However, within five years, investment in solar energy, for example, soared.
As of 2020, Vietnam had over 7.4 gigawatts (GW) of rooftop solar power connected to the national grid. These renewable energy numbers surpassed all expectations. It marked a 25-fold increase in installed capacity compared to 2019’s figures.
In 2021, the data showed that Vietnam now has 16.5 GW of solar power. This was accompanied by its green energy counterpart wind at 11.8 GW. A further 6.6 GW is expected in late 2021 or 2022. Ambitiously, the government plans to further bolster this by adding 12 GW of onshore and offshore wind by 2025.
These growth rates are actually much faster than growth rates in the US.
If you have a basic ARM MacBook, GLM-OCR is the best single model I have found for OCR with good table extraction/formatting. It's a compact 0.9b parameter model, so it'll run on systems with only 8 GB of RAM.
Then you can run a single command to process your PDF:
glmocr parse example.pdf
Loading images: example.pdf
Found 1 file(s)
Starting Pipeline...
Pipeline started!
GLM-OCR initialized in self-hosted mode
Using Pipeline (enable_layout=true)...
=== Parsing: example.pdf (1/1) ===
My test document contains scanned pages from a law textbook. It's two columns of text with a lot of footnotes. It took 60 seconds to process 5 pages on a MacBook Pro with an M4 Max chip.
After it's done, you'll have a directory output/example/ containing .md and .json files. The .md file holds a markdown rendition of the complete document; the .json file holds the individual labeled regions from the document along with their transcriptions. If you collect all the JSON objects with
"label": "table"
from the JSON file, each object's "content" field contains an HTML-formatted table.
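The filtering step above is a few lines of Python. This sketch assumes the .json output is a flat list of labeled region objects; the exact top-level structure may differ depending on your GLM-OCR version, so inspect the file first.

```python
import json

def extract_tables(json_path):
    """Collect the HTML from every region labeled "table".

    Assumes the output JSON is a flat list of region objects, each
    with "label" and "content" keys; adjust if your version nests
    regions under another key.
    """
    with open(json_path) as f:
        regions = json.load(f)
    return [r["content"] for r in regions if r.get("label") == "table"]
```

Usage would be something like `tables = extract_tables("output/example/example.json")`, after which each entry can be written out as an .html file or parsed further.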
It might still be inaccurate -- I don't know how challenging your original tables are -- but it shouldn't be terribly slow. The tables it produced for me were good.
I have also built more complex workflows that use a mixture of OCR-specialized models and general purpose VLM models like Qwen 3.5, along with software to coordinate and reconcile operations, but GLM-OCR by itself is the best first thing to try locally.
I also get connection timeouts on larger documents, but it automatically retries and completes. All the pages are processed when I'm done. However, I'm using the Python client SDK for larger documents rather than the basic glmocr command line tool. I'm not sure if that makes a difference.
Cool! For GLM-OCR, do you use "Option 2: Self-host with vLLM / SGLang" and in that case, am I correct that there is no internet connection involved and hence connection timeouts would be avoided entirely?
When you self-host, there's still a client/server relationship between your self-hosted inference server and the client that manages the processing of individual pages. You can get timeouts depending on the configured timeouts, the speed of your inference server, and the complexity of the pages you're processing. But you can let the client retry and/or raise the initial timeout limit if you keep running into timeouts.
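The retry-on-timeout pattern is generic regardless of which client you use. This is a hedged sketch, not the GLM-OCR SDK's actual API: the real client exposes its own timeout and retry settings, so adapt the exception type and configuration to whatever your client actually raises.

```python
import time

def with_retries(fn, attempts=3, backoff=1.0):
    """Call fn(), retrying on timeouts with exponential backoff.

    Generic illustration only; substitute the timeout exception and
    settings of the OCR client you actually use.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff * (2 ** attempt))
```

The same effect can often be had by just raising the client's initial timeout limit, which avoids reprocessing a slow page from scratch.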
That said, this is already a small and fast model when hosted via MLX on macOS. If you run the inference server with a recent NVIDIA GPU and vLLM on Linux, it should be significantly faster. The big advantage of vLLM for OCR models is its continuous batching capability. With other OCR models that I couldn't self-host on macOS, like DeepSeek 2 OCR or Chandra 2, vLLM gave dramatic throughput improvements on big documents via continuous batching when I processed 8-10 pages at a time. This is with a single 4090 GPU.
It takes time for statistical agencies to compile reports. I haven't yet found reports covering the growth in renewable generation (actual terawatt hours) for all of 2025. But this covers 3 quarters of the year:
In the first three quarters of 2025, solar generation rose by 498 TWh (+31%) and already surpassed the total solar output in all of 2024. Wind generation grew by 137 TWh (+7.6%). Together, they added 635 TWh, outpacing the rise in global electricity demand of 603 TWh (+2.7%).
Ember is an absolute treasure. Often you'll see articles on HN from places like Electrek which are blogspam linking back to Ember's original reporting.
Their electricity data explorer is to my knowledge the most complete on the open internet.