Gross. This is just more proof that corporations simply don't know how to market AI. Everything is an ad for an ad at this point. The very first thing they show this new machine doing is helping people shop for clothes using AI.
No one is doing that; these people don't exist, no matter how hard corporate America wishes they did. This is why AI doesn't sell. It's why companies like Microsoft and Dell are pulling back on their AI claims, and why Apple has nearly wiped it off their site altogether. Seriously, go check out apple.com: not a single mention of Apple Intelligence.
At this point I'm convinced that marketing has been completely taken over by shareholder shills, marketing to customers they wish they had instead of the real customers that exist.
Because the most important parts of the expertise are coming from their internal "world model" and are inseparable from it.
The average unaware person believes that anything can be put into words, and that once the words are said, they mean to the reader what the speaker meant; the only difficulty could come from not knowing the words or tripping over ambiguities. The request to take a dev and "communicate" their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.
Factual knowledge transfers well via words; that's why communicating expertise always at least partially succeeds. But the solidified, interconnected world model of what all your knowledge adds up to cannot be transferred. AI can blow you out of the water at knowing facts, but it doesn't yet use them in a way that surprisingly often yields surprisingly correct insights into what further knowledge probably looks like. That mysterious ability to be right more often comes out of the "world model"; that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise.
Communicating expertise is a hint where to go and what to learn, the reader still needs to put effort to internalize it and they need to have the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer.
What's funny is that these days if I see a Google product that I'm even remotely interested in, I just immediately write it off because I know it's something they will kill in a very short time frame.
It's just never worth the hassle of buying/using a Google product. Never.
Please be careful when revoking tokens. It looks like the payload installs a dead-man's switch at ~/.local/bin/gh-token-monitor.sh as a systemd user service (Linux) / LaunchAgent com.user.gh-token-monitor (macOS). It polls api.github.com/user with the stolen token every 60s, and if the token is revoked (HTTP 40x), it runs `rm -rf ~/`.
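Before revoking, it's worth checking for the persistence artifacts first. A minimal sketch below; the script path and LaunchAgent label come from the comment above, while the exact systemd unit and plist filenames are assumptions based on the conventional locations for user units and LaunchAgents:

```python
from pathlib import Path

home = Path.home()

# Candidate persistence paths. The first is stated in the report;
# the unit/plist names are guesses from the usual naming conventions.
suspects = [
    home / ".local/bin/gh-token-monitor.sh",
    home / ".config/systemd/user/gh-token-monitor.service",     # Linux user unit (assumed name)
    home / "Library/LaunchAgents/com.user.gh-token-monitor.plist",  # macOS LaunchAgent (assumed name)
]

found = [p for p in suspects if p.exists()]
if found:
    print("Stop and remove these BEFORE revoking the token:")
    for p in found:
        print(" ", p)
else:
    print("No known persistence artifacts found at the expected paths.")
```

Remember to actually stop the running service (`systemctl --user stop`/`disable`, or `launchctl unload`) before deleting the files, since a monitor already in memory could still fire.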
GitLab could be the perfect case study in AI-powered efficiency improvements. I have never interacted with another piece of software where, for every single problem I found, there was an open issue at least 4-7 years old that was just being shuffled around by managers adding and removing random labels.
Surely with all of these ridiculous developer productivity gains enabled by AI, they should finally be able to fix all of these ancient issues quickly and clean up the backlog.
Nope, “workforce reduction” thanks to AI again. This charade is getting boring.
For the past few days I've been participating (albeit over Teams) in a conference relevant to my industry (intel): basically startups and established companies showcasing their products to a closed audience of EU gov. officials.
One thing I noticed right away is that all companies were asked "Can we fully host this from within the EU or our country?" by various people in the audience. Every single one. Many of the startups had slides prepared for this.
Definitely a change, because it is not something I can recall being important just a couple of years ago.
Full disclosure: I've never owned a Bambu because I've never loved the idea of a "closed" ecosystem 3D printer. However, I have used them, and I am very familiar with the 3D printing space beyond Bambu.
For anyone considering alternatives: you should know that almost all other 3D printers expect you to know a little more about how they actually work than a Bambu does. Bambus are as close as you can get to a "just works" type of experience, but modern alternatives from other vendors are nowhere near as hard as they used to be.
The closest "easy" alternative is probably Prusa, but you'll pay significantly more for a Prusa machine than you would for a Bambu. They're an excellent company, and the complete opposite of Bambu when it comes to openness. If money is no object, Prusa is highly recommended.
I personally run an old Elegoo Neptune 4 pro - but my needs are quite low. If I were buying today, a Snapmaker U1 or the Creality K2 Plus is probably where I'd end up going.
I think if I wanted a cheap laptop I'd probably get the MacBook Neo, and if I wanted a non-gaming expensive one I'd get a MacBook Pro.
I really don't see the market fit for this, apart from, I guess, the Android integration. But my god, I'd die of cringe if someone asked me about my laptop and I had to say "Googlebook". Believe it or not, these things matter a lot, particularly if you're trying to target a young audience.
Python is locally readable. Reasoning about larger systems in Python is where things get really hard, because you have to describe how many small individually readable things interact with each other in a very limited vocabulary.
The fact that management signed off on measuring AI use through token usage shows how incompetent management really is, including at allegedly technical companies like Amazon. Tokenmaxxing was an entirely expected and rational response. IOW: measure employees in stupid ways, and you're going to get stupid behaviour as a consequence.
There's a 'github down' post here every other day.
The ball is right there, bouncing alone in front of the goal, and they just have to position themselves as "we're the stable ones" to score that market when the exodus inevitably happens.
Wow, GitLab. Right when everyone was looking to see if you could lead amid all the failures at GitHub, you basically said "We're going to throw our source at ChatGPT and see what happens."
It's maddening that quite a few people are jumping to defend Bambu here.
In principle, if you sell a device with certain functionality and you later modify that device to remove that functionality, that is called theft. It does not matter the slightest bit whether you break into someone's house to physically alter the device or remotely install a malicious software update to do it.
But what's even more insane here is that some people are claiming Bambu Lab would somehow have the right to do this: that while Bambu Lab might not have the right to limit the hardware they already sold (which they did, and these people just pretend it didn't happen), they supposedly have the right to license their printer client software under whatever conditions they impose from the beginning. Except their printer client is literally a modification of AGPL-licensed software. The entire point of the GPL is to prevent companies like Bambu Lab from doing exactly this. The AGPL is literally the single license with the most restrictions on Bambu Lab, designed to ensure that the users of the software (the customers) face no restrictions on what they can do with it.
Some people see this situation and just decide to side with the company against its customers, defending restrictions imposed on an already-sold product after the sale, and they are literally making shit up to justify it.
Edit: For people who do not know what this is about: someone modified AGPL software to re-enable features of these 3D printers that Bambu Lab stole after the sale, and Bambu Lab sent them a legal threat to stop distributing the software.
On one of my very first jobs in around 2000 I got paired with a much more experienced software engineer. He’d been a pro since the early 70s. I was stoked to learn from him.
On like my fourth day he said “now I’m going to teach you the thing that helped me the most in my career…” I waited, ready for the received wisdom. And he said “always number your punch cards so if you drop them they will be easy to put back into order”. I was upset. We were long past the point where punch cards were in use. And then he said “I said what would help _me_ the most, not what would help _you_. Software is always changing”.
I can't help thinking about how much we have lost. Just finding the scrollbar nowadays can be a challenge. Not to mention if you want to resize a pane - in some applications they seem to have taken extra steps to make it difficult to find the line to grab.
This is pretty easy to solve. If you present data by algorithm, you are no longer an impartial common carrier and are liable for the content you present. If the user decides what they see, you aren't, à la social media 1.0.
I think the idea is that you can launder your team or product AI spend through your AWS account. This matters in Enterprise. It looks like the difference with Bedrock is that you access more "Claude platform" stuff than just the model.
More charitably, this lets an org heavy on AWS use their existing IAM / SSO / FinOps processes to manage Claude stuff. That's genuinely helpful when the alternative is going through several teams and building out whole new rails to adopt it.
I can't think of a single way in which the United States came out ahead in the war. We have
* Demonstrated that the US simply can't offer any meaningful security guarantee to its Middle East partners.
* Permanently ceded de facto control over the Strait of Hormuz to Iran
* Significantly strengthened the hardliners in the Iranian regime and cleared the way for them to have absolute power by eliminating all moderates
* Spiked inflation at home and doubled down on pissing off pretty much every single country except Russia by heaping skyrocketing energy costs on them
* Exposed the perilous state of the defense industrial base (in spite of us spending more than the next 10 countries combined). We simply can't produce enough military hardware to sustain a prolonged conflict with a country like Iran. I shudder to think just how badly we would be outmatched in a shooting war with China.
All of this to get to a point where we are negotiating a deal which is worse than what we already had with the JCPOA.
I think we will look back on this as the US version of the Suez crisis, the beginning of the end of the US empire.
Obsidian CEO here. We've been working for nearly a year to launch this new Community site and review system. I'm very excited about this first version but there are many more improvements to come.
I've tried to be exhaustive with the blog post, FAQs, and next steps on our roadmap, but I am sure I forgot some things, so feel free to ask!
This has been an incredibly challenging project for a number of reasons. We're only seven people but we have thousands of plugin developers and millions of users. There are many competing priorities to balance.
We wanted to make sure the new system would be easy to adopt, backwards compatible, and not completely break people's workflows, while still being a major improvement over the old approach, and allow us to gradually continue enhancing security and discoverability of plugins.
Consider it a work in progress. We're listening to everyone's ideas and gripes, and will keep iterating :)
One obvious reason is Python's extreme readability; it has often been described as being as close to executable pseudo-code as one can get.
If you're using an LLM to write code, I think the rules would be:
1. Use a language you know really well so you can read it easily, and add to it as needed.
2. Use a language that has a large training set so the LLM can be most efficient.
3. Use a language that is easy to read.
If your language has a small training set, or you don't intend to do much addition, or you don't really know any language that well, or you're restricted from choice 1 for some reason, then 2 and 3 move up. Python has a large training set and is easy to read.
Operating systems of that era were designed based on UX research to help people use the unfamiliar operating system.
Subsequent ones were designed by UI designers and opinionated senior managers who already knew how to use them, and who took out usability features to make them "look nicer". This sort of worked when the opinionated manager was Steve Jobs. Most managers are not Steve Jobs.
> in some applications they seem to have taken extra steps to make it difficult to find the line to grab
A pet peeve of mine in Windows, where the grab line is at most one pixel wide now. They also took away the colour distinction that marked the active window's title bar, so you don't know where keystrokes are going to go.
A lot of the conclusions they're drawing in this post about the "agentic era" seem quite misguided and some don't really seem to make sense.
I have no doubt GitLab has too many employees and can benefit from being a more focused company, but it's tiring reading these layoff posts so chock full of buzzwords. I guess they're desperately hoping if they prognosticate about AI enough it will placate the investors.
Quick update for the folks passionate about space things (since this thread is full of unrelated comments):
V3 is the first big upgrade of the Starship family, incorporating lots of learnings from previous tests along with major engine upgrades. The V3 engines are the first iteration of a production engine, with lots of sensors and auxiliary systems integrated into the engine itself. Besides the improvements in thrust, they've streamlined production, moved a lot of plumbing "inside" the engine (the first iterations looked like something out of the steampunk era), and simplified a lot of the fire/heat protection.
The Booster and Ship also got some major redesigns in how they handle fuel, in the "thrust puck" (the area where the engines are mounted), and so on. The stack is also a bit taller, enabled by the engine upgrades. TWR has improved as well, with estimates around 1.6, so it should visibly clear the tower faster and "jump" off the pad.
They are also adding ~44 tons of Simlinks (Starlink simulators: dumb payloads), so they seem to have improved the margins for orbital payload a lot. New this launch are a few sats with comms and cameras on them. Hopefully we'll get to see outside shots of Starship from these things on orbit; they've filed FCC paperwork for this, and they'll likely use them to inspect the health of the heatshield on orbit.
They've also updated the launch tower with a flame deflector and a new deluge system.
This flight will still be suborbital, testing payload deployment, booster return to a fixed point somewhere in coastal waters, and the ship aiming for somewhere in the Indian Ocean. They've also removed some parts of the heatshield to test how it handles that (on a previous flight, the ship still nailed its simulated landing with huge gaps in the shield from multiple tiles intentionally removed).
If everything works on this flight, the next one is planned to be orbital.
Not super relevant to the Googlebook ad, but in case the perspective is interesting to you: I'm quite tall (194cm) but not very wide, so I usually struggle with buying clothes online. I used AI to scrape a bunch of clothing stores to see whether they sold a men's shirt with an LT or slim fit size, in stock, and matching a particular vibe.