I work in the domain, and it is true that many startups rely almost entirely on free data, such as imagery from ESA's Sentinel satellites. It really lowers the barrier to entry if you have a good idea.
Apple's 30% tax was introduced under Steve Jobs and there were no small business exemptions back then. Jobs died in 2011. It's time to stop extrapolating what Jobs would be doing 15 years later in 2026 if he were still around. Could be the same, could be better, could be worse.
> Curious what extra protection this gives you, considering the environment variables are, well, in the environment, and can be read by the process. If someone does a remote code execution attack on the server, they can just read the environment.
While your secrets are available at runtime, you get a lot of governance by placing them in something like a key vault. You get an audit trail, and you can set up rotation policies. It's easier to reference different secrets for dev, test, prod, etc. I'd argue there is a lot of added security in the fact that your developers won't actually need any sort of access to a secret stored in a key vault.
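To make the governance point concrete, here is a minimal sketch of the pattern, not any real vault SDK: the `AuditedSecretStore` class, the `env/name` naming scheme, and the in-memory backend are all hypothetical stand-ins for what a managed key vault does for you (audited reads, per-environment resolution).

```python
from datetime import datetime, timezone

class AuditedSecretStore:
    """Hypothetical sketch of vault-style governance: every read is
    recorded in an audit trail, and secret names are resolved per
    environment so dev/test/prod reference different values."""

    def __init__(self, env, backend):
        self.env = env
        self.backend = backend   # stand-in for the real vault storage
        self.audit_log = []      # a managed vault keeps this for you

    def get_secret(self, name):
        qualified = f"{self.env}/{name}"  # e.g. dev/db-password vs prod/db-password
        self.audit_log.append((datetime.now(timezone.utc), qualified))
        return self.backend[qualified]

store = AuditedSecretStore("dev", {"dev/db-password": "hunter2"})
password = store.get_secret("db-password")
```

The application only ever asks for `db-password`; which concrete secret that resolves to, and who read it when, is the vault's business, which is exactly the governance you don't get from a plain environment variable.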
Been working with spec-driven development since it first emerged (OpenSpec fan), and kept running into the same gap: specs get written, AI writes the code, but validating the implementation against the spec is still a manual slog.
So I put together a tool that runs multi-agent code reviews inside agentic AI environments (Claude Code, Cursor, Windsurf, etc).
The idea is simple – instead of one reviewer, you get a few different perspectives, modeled after common engineering teams: someone thinking architecturally, someone focused on code quality, another with security expertise, a QA engineer paying close attention to testing. They review changes independently, then discuss before synthesizing feedback. You can also configure custom reviewers and adjust redundancy if you want multiple sub-agents for a given role.
I originally built it to complement SDD tools like OpenSpec, but it drops into any codebase within an agentic environment (agentic IDE or agentic terminal).
My team has been running it against some larger codebases (~500k lines) and it's been genuinely useful – not as a replacement for human review, but as a way to catch things earlier. No doubt it has already saved us quite a bit of time.
```bash
npx @open-code-review/cli init
/ocr-review
```
Works as a pre-push sanity check or wired into CI.
One warning, though: your token expenditure WILL increase, especially if you ramp up the number of reviewer types or their redundancy configs.
That said, curious what others think – is this useful? What's missing?
What’s happening on this site? The page loads, a number starts counting up from 47, and then it says “You fell behind reading this”. As I start scrolling, paragraphs of text start floating up. I am really confused.
That's weird too, maybe they just have some preorders they need to fulfill. They did halt production for a while last year and reduced the number of models available.
This is the last time I'll answer you, because at this point you are acting in bad faith:
Telefónica Audiovisual Digital SLU is suing, among others, TELEFÓNICA ESPAÑA S.A.U. and TELEFÓNICA MÓVILES ESPAÑA S.A.U. What do you think is going on there? Come on, you don't even have to be that smart: they are suing themselves to get a court order that allows them to block the IPs. In fact, some of them are eagerly waiting for the judge's permission to click the ban button the next second.
> You are getting this wrong. The judge isn't acting on their own initiative here, but because La Liga (together with Movistar+) sued the biggest ISPs in Spain.
The nerve you have. This is exactly what I was saying from the beginning: the ban is not something that comes from the state or the government. The ban is something the ISPs are asking for, but they need the OK from a judge. The discussion here was whether there are parallels between the Iran Internet blackout (state initiated and enforced) and Spain banning some Internet IPs during football matches due to piracy (private companies initiating and enforcing it on their users), because some of you were painting Spain as some kind of 1984 state.
Your last part is FUD, and a slippery slope fallacy. There are ways to refuse to comply, for example not banning the IPs and then claiming technical difficulties. In fact, that's what all the ISPs did when they were first asked to ban Cloudflare IPs: they delayed the ban for a couple of hours until the football match ended, then did nothing, claiming it was impossible to comply. No one was locked up, forced to comply, or summarily executed (you are not using that word correctly; it means "without trial"). Week after week, they are not issuing the IP bans, and they are still safe and sound.
A private blockchain is an oxymoron. The point of a blockchain (as it's understood by the crypto world, at least) is for it to be publicly readable by anyone.
I never get the single-threaded assertions regarding CPU performance; they are mostly useless in the age of preemptive scheduling in modern OSes.

Yes, it matters on MS-DOS-like OS designs, like some embedded deployments, and that is about it.

It is even impossible to guarantee a process doesn't get rescheduled onto another CPU, with the performance impact that entails, unless the process explicitly sets its CPU affinity.
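For reference, explicitly setting affinity is a one-liner on Linux. A minimal sketch, assuming a Linux host (the `sched_setaffinity` calls don't exist on e.g. macOS, hence the guard):

```python
import os

def pin_to_cpu(cpu: int) -> set:
    """Linux-only sketch: pin the calling process to a single CPU so the
    scheduler cannot migrate it (losing cache-warm state) mid-run."""
    if not hasattr(os, "sched_setaffinity"):
        return set()  # affinity API unavailable on this OS
    os.sched_setaffinity(0, {cpu})   # 0 = the calling process
    return os.sched_getaffinity(0)   # the set of CPUs we may now run on

allowed = pin_to_cpu(0)
```

Without a call like this (or `taskset`/cgroup pinning from outside), any single-threaded benchmark number is at the mercy of where the scheduler happens to place the process.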
Yes, decriminalization and harm reduction seem to work in the few places where they have been implemented for ages now.
I mean, there are still many places in Europe where you can go to prison for years for smoking weed. A life ruined because you smoked an illegal plant in your own home.
In all seriousness, I don't know a ton about her, but if her Wikipedia is to be believed, the only thing she is guilty of is being a little extra while living through a genocide.
> In the nineteenth century, cities grew quickly. Between 1800 and 1914, the population of Berlin’s metropolitan area grew twenty times, Manchester’s twenty-five times, and New York’s a hundred times. Sydney’s population grew around 240 times and Toronto’s maybe 1,700 times. Between 1833 and 1900, Chicago’s population grew around five thousand times, meaning that on average it doubled every five years.
> Raw population growth understates the speed of expansion. The number of people per home fell, and, in Britain and America, the size of the average home roughly doubled. At the same time, those homes fit on a smaller share of land, with huge swaths given over to boulevards, parks and railways. The expansion in surface area was thus often several times greater than the expansion in raw population. Meanwhile, real house prices remained flat, while incomes doubled or tripled, generating a huge improvement in housing affordability. Far more people were enjoying far larger homes for a far smaller share of their income.
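The quoted Chicago figure can be sanity-checked with a quick doubling-time calculation (the 5000x growth factor and the 1833–1900 window are taken from the quote; the rest is just arithmetic):

```python
import math

# The quote claims Chicago grew ~5000x between 1833 and 1900,
# "meaning that on average it doubled every five years".
years = 1900 - 1833                                  # 67 years
growth = 5000
doubling_time = years * math.log(2) / math.log(growth)
# growth == 2 ** (years / doubling_time), so this recovers the
# average doubling period implied by the quoted numbers.
```

The result comes out just under five and a half years, so the "doubled every five years" gloss in the quote is roughly right.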
I was curious but it’s surprisingly hard to find info. These guys [1] are pretty stoked about “nowcasting”—which seems to be on sub-10-minute timescales to issue local severe weather warnings and such. It appears current sounders don’t scan as often.
This project presentation from 2011 [2] references different requirements for different areas/teams and shows the instrument spits out readings at 150 Mbit/s, which seems like a good clip. Overall it sounds like a lot of local knowledge is involved in turning this output into forecasts. Maybe there’s not a precise answer to your question.
>To get to the point of executing a successful training run like that, you have to count every failed experiment and experiment that gets you to the final training run.
I get the sentiment, but then, do you count all the other experiments the company ran before specifically trying to train this model? All the experiments its people ran at other companies earlier in their careers? They rely on that experience to train models.
You could say "count everything that has been done since the last model release", but then for the same amount of effort/GPU, if you release 3 models does that divide each model cost by 3?
Genuinely curious how you think about this. I think saying "the model cost is the final training run" is fine, as it seems to be the standard ever since DeepSeek V3, but I'd be interested if you have alternatives. Possibly "don't even talk about model cost, as it will always be misleading and you can never really spend the same amount of money to get the same model"?
I am sure they thought about it. I mean, that’s the first thing that comes to mind and I never really studied wood. So I am not going to assume that they ignored the obvious.
That said, wood can be treated to remove quite a lot of stuff, leaving behind a strong porous structure that can be filled with various things to tweak its properties.
For me, this isn’t just a curiosity—it’s a question of whether I need to completely change the way I work. Given the same conditions, I’d rather revisit the domain knowledge and get better at instructing AI than write everything by hand.
Outside the areas where AI struggles, or where really deep domain expertise is needed, I feel like comparing productivity just doesn’t make sense anymore.
All good when it's really like that, and not a) a cope, or b) a result of the risk profile of VCs/bigger financial players in the small local swamp just eating up cool companies and founder potential for pennies.
A local Belgian startup just got acquired, and I was hyped, but then I realized at what price, and honestly it just seems like a waste of time for the whole team, maybe even including the founders. I think the amount is 1000x less than what a US product would fetch.
You cannot possibly compare Sydney and London; they are very different. London is a bustling, diverse city; Sydney is a nice (big) town. Sydney is a great place, and London is far from perfect, but they are not in the same conversation. They are different.
The Global Liveability Index is, essentially, highlighting the most middle of the road cities. London (as with New York) has huge disparities and that guarantees it will never rank well on the Global Liveability Index. The people who choose to live in London and love London (as with the people who choose to live in New York and love New York) do not choose it because it is average.
I have lived in London and Sydney and many other cities. I have fallen out of love with London. I would rather live in Sydney than London. I still cannot imagine ever describing Sydney as a better city than London, just as I can't imagine describing Copenhagen as better than London.
Healthcare in London is world class. The city is crowded. The weather is very average for Europe.
People from Sydney who move to London come to hate it once the novelty wears off, just as they would New York, because the Australian way of life is very different.