I love this, but I don't use Emacs. I wish existing tools were lighter weight at video trimming. ScreenFlow is the lightest I know of, but it fails badly on some video formats and sometimes OOMs. If it had a better architecture that streamed bytes, I feel like it might not have that problem.
Reading through the comments under this thread, there are many users who swear by a plain text file, but who then build quite a lot of snowflake software to regain functionality offered by more structured TODO applications. That includes:
- having your computer alert you to things that come up
- being able to tag notes
- being able to add events to a calendar
- being able to set priority of tasks
- expecting prioritized/currently relevant tasks to be at the top of the agenda
- being able to add recurring tasks
- full-text search (grepping)
- formatting features (markdown)
Some of the laborious (or, in my opinion, plain unholy) solutions include:
- feeding TODOs to an LLM to filter for the currently relevant ones and send Telegram notifications
- hand-copying currently relevant tasks to the top of the TODO list
- running a script on a VPS to sync notifications
- setting up a cron job with git commit
- writing post-it notes by hand
I would encourage everyone to try out emacs with org-mode. It takes some time to get used to the editor and its keybindings (though provisions exist for vim users), but _every_ item on the list above is handled out of the box, or is offered through a free and maintained plugin.
The author of the OP claims to have tried _every_ todo app, and has afterwards moved (regressed?) to writing notes in a plain text file, but there is a path extending from this point that the author has not walked yet. I'd argue that, especially for people with a computing or technical background, it is an unambiguous upgrade. https://doc.norang.ca/org-mode.html being the bible, of course.
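To make that concrete, here is roughly what a few of those features look like in org-mode markup (the task names are made up): `[#A]` sets a priority, `:errands:` is a tag, `+1w` makes the scheduled date repeat weekly, and the timestamps surface automatically in the agenda view.

```org
* TODO [#A] Renew passport                    :errands:
  SCHEDULED: <2024-03-01 Fri>
* TODO Water the plants
  SCHEDULED: <2024-03-04 Mon +1w>
* Team meeting
  <2024-03-05 Tue 10:00>
```

Full-text search and markup come along for free, since it's all just a text file.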
Why do all those microframeworks copy the worst aspects of Flask?
The global request object, the bad integration with sqlalchemy, the strong coupling with an application object that forced the indirection of blueprints...
If you start again from scratch, forcing people to migrate their stack and therefore leave their ecosystem behind, at least fix the glaring API problems.
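For readers who haven't hit it: the "global request object" complaint is that handlers read ambient, context-local state instead of receiving the request as a parameter. A toy sketch of the two styles (this is not Flask's actual internals, and the names are made up):

```python
import threading

# A sketch of the "context global" pattern -- Flask's real context locals
# are more elaborate, but the shape of the problem is the same.
_ctx = threading.local()

class Request:
    def __init__(self, args):
        self.args = args

def handle_with_global():
    # The handler reaches out to ambient state: convenient, but the data
    # flow is invisible in the signature and harder to test in isolation.
    return _ctx.request.args.get("q", "")

def handle_explicit(request):
    # The request is an ordinary argument: trivially testable, no magic.
    return request.args.get("q", "")

_ctx.request = Request({"q": "hello"})
print(handle_with_global())                    # hello
print(handle_explicit(Request({"q": "hi"})))   # hi
```

The explicit style is what several non-Flask frameworks adopted, precisely because it makes handlers plain functions.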
I'd love to see this analysis done for ChatGPT, which has a much bigger 'consumer' marketshare.
I'm also very wary of their analysis method, given that classifiers gonna classify. We already see it in their example of someone asking why their game is crashing, which gets bucketed into the Computer & Mathematical occupation. I'm guessing the original question came not from a game developer but from a game player, so can you really call this an occupational task? Sure, it's in that domain, I guess, but in a completely different context. If I'm asking how to clean my dishwasher, that's hardly a repair or industrial occupation.
It's because platforms can deal with feature complexity and UX standardisation in a way that protocols can't.
Multi-protocol clients tend to end up a mess compared to the integrated experience of a platform which can provide a single source of truth for identity, authentication, and so on.
Netscape Communicator ticked many of the boxes of Facebook years earlier, but by kludging together NNTP, HTTP, SMTP, POP3, FTP etc., and that's before you consider the difficulty of moderating an open syndication like Usenet or IRC, or the pain in the ass that email spam had become by the early 00s.
Protocol/standards people like to think they care about UX, but for platform companies, user growth and retention literally pays their bills. It's just a different set of incentives.
And to be clear, I prefer the more open internet, but UX-wise it never stood a chance against normie-optimised, integrated platforms.
Being one of those lucky few at Google Brain who switched into ML early enough to catch the wave... I fielded my fair share of questions from academic friends about how they could get one of those sweet gigs paying $$$ where they could do whatever they wanted. I wrote these two essays basically to answer this class of questions.
When asked what their favorite part of the trip was, they responded:
The hot tub.
At the hotel.
My kids light up the most when I am fully engaged with them, fully present, entertaining their ideas, and asking questions.
Their favorite family trip so far? When we traveled to Arkansas to mine for crystals. AKA, dig in the dirt all day. They saw it on a YouTube video. They asked to go. So we obliged. I had never been to Arkansas. It's beautiful.
We stayed at a resort, Diamonds Old West Cabins, with a huge playground outside the cabins, archery, and a bubble party every evening at 6 pm.
The issue is that "workflow orchestration" is a broad problem space. Companies need to address a lot of disparate issues, so any solution ends up being a giant, heavily opinionated product with a lot of associated functionality as it grows into a big monolith. This is why, almost universally, folks are never happy.
In reality there are five main concerns:
1. Resource scheduling-- "I have a job or collection of jobs to run... allocate them to the machines I have."
2. Dependency solving-- if my jobs have dependencies on each other, perform the topological sort so I can dispatch things to my resource scheduler.
3. API/DSL for creating jobs and workflows-- I want to define a DAG... sometimes static, sometimes on the fly.
4. Cron-like functionality-- I want to be able to run things on a schedule or ad hoc.
5. Domain awareness-- if doing ETL I want my DAGs to be data-aware; if doing ML/AI workflows, I want to be able to surface info about what I'm actually doing with them.
No one solution does all these things cleanly. So companies end up building or hacking around off the shelf stuff to deal with the downsides of existing solutions. Hence it's a perpetual cycle of everyone being unhappy.
I don't think that you can just spin up a startup to deliver this as a "solution". This needs to be solved with an open source ecosystem of good pluggable modular components.
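Concern 2 above is a good example of how small each piece is in isolation: it's just a topological sort, which the Python standard library now covers. A minimal sketch with a hypothetical job graph:

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical jobs, each mapped to the set of jobs it depends on.
jobs = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load", "transform"},
}

# static_order() yields jobs in an order where every dependency
# comes before its dependents, and raises CycleError on cycles.
order = list(TopologicalSorter(jobs).static_order())
print(order)
```

In a modular world, each name popped from `order` would simply be handed to the resource scheduler (concern 1), and the API/DSL (concern 3) is whatever produces the `jobs` dict.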
Yeah, this is very nice, an open-source Martian. I never really understood the value of routing all the time. You want stability and predictability in models, and models have huge brand value. You're never going to construct, through routing, a "super" model that people want more than one or a few really good brand-name models.
A simpler alternative is xylitol. Not a drug, no FDA approval required. It's a plant-based sweetener that cavity-causing mouth bacteria love to ingest but which provides no sustenance to them. It essentially fills them up and then causes them to starve to death, helping keep mouth bacteria to a minimum. No bacteria, no cavities. Get it in mints or gum like Zellies or PUR (the only two I've found that don't include titanium dioxide). Take one a day after brushing in the evening so it kills bacteria overnight.
The way out is authenticity. Signed content is the only way to get that. You can't take anything at face value: it might be generated, forged, etc. When anyone can publish anything, and when everyone is outnumbered by AIs publishing even more things, the only way to filter is by relying on reputation and authenticity, so you can know who published what and what else they are saying.
Web of trust has of course been tried, but it never got out of its "geeky thing for tinfoil-hat-wearing geeks" corner. It may be time to give it another try.
I git cloned the repo and then ran sloccount after checking out various commits (just did `git log | grep -C3 'Jan 1 [0-9:]* 2017'` or similar to find the relevant commits)
Computer history is one of my favorite topics, so I've read a lot over the years. Here's my list:
>> Classic computer history:
- "Hackers: Heroes of the Computer Revolution", Steven Levy
- "The Innovators", Walter Isaacson
- "Valley of Genius: The Uncensored History of Silicon Valley", Adam Fisher [innovative format, tons of interesting tidbits after you get used to the style. Read only after the other two above]
- "The New New Thing: A Silicon Valley Story", Michael Lewis
- "The Second Coming of Steve Jobs", Alan Deutschman
- "Revolution in The Valley: The Insanely Great Story of How the Mac Was Made", Andy Hertzfeld
- "Masters of Doom", David Kushner
- "Idea Man", Paul Allen
- "Where Wizards Stay Up Late", Katie Hafner
>> Entertaining stories, but less historical value:
- "Ghost in the Wires", Kevin Mitnick
- "Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley", Antonio Garcia Martinez
- "Hatching Twitter: A True Story of Money, Power, Friendship, and Betrayal", Nick Bilton
>> On my to-read queue:
- "How the Internet Happened", Brian McCullough [just started; very promising]
- "Troublemakers: Silicon Valley's Coming of Age", Leslie Berlin
- "Code Girls: The Untold Story of the American Women Code Breakers of WWII", Liza Mundy
- "Fire in the Valley: The Making of The Personal Computer", Paul Freiberger / Michael Swaine
>> Others worth mentioning (but just read a few chapters):
- "The Supermen: The Story of Seymour Cray", Charles Murray [about Cray Computers]
- "Racing the Beam" [about Atari]
- "Commodore: A Company on the Edge" [about Commodore]
>> Bonus:
- "Art of Atari", Tim Lapetino [great as a coffee table book, particularly if you grew up in the 80's :) ]
Can I quote zach's 15-year-old comment on Python vs Ruby? Maybe it will have some relevant insights:
> Ruby has clever syntax. Python has pure syntax.
> Ruby has method aliases. Python does not allow a string to capitalize itself.
> Ruby uses Ruby methods within Ruby classes to extend Ruby. Python has decorators so you can write functions that return functions that return functions to create a new function.
> Ruby has strict object-oriented encapsulation. Python is laid-back about objects, because you probably know what's going on inside them anyway.
> Ruby lets you leave off parentheses so you don't miss objects having attributes too much. Python will let you mix tabs and spaces for indentation, but passive-aggressively mess up your scoping as punishment.
> Ruby has seven kinds of closures. Python has one, in the unlikely case a list comprehension won't do.
> Ruby's C implementation is a mess of support for language-level flexibility. Python's C implementation is so clean you get the unsettling thought that you could probably write Python using C macros.
> Ruby supports metaprogramming for cases when programmers find it more descriptive. Python supports metaprogramming for cases when programmers find it necessary.
> Ruby is expressive. Python is direct.
> Ruby is English. Python is Esperanto.
> Ruby is verse. Python is prose.
> Ruby is beautiful. Python is useful.
> I like Python, but coming to it after using Ruby for seven years, well, I think it's like dog people and cat people. You can like having one species around, but you're always thinking -- why they can't be more like the other?
Recent W&M Condensed Matter Physics grad here. Worked closely with HT Kim, though not on this project. He is a trustworthy guy who knows his stuff. I think he is right to call the paper very sloppy; I am confused why there is no phase diagram, and the sample purity seems suspect. These are things that would have been addressed in peer review and would give me more confidence overall. Probably not fraud, but that doesn't mean it's superconductivity.
Not optimistic about replication in the next week either. Solid-state synthesis seems "easy" but in my experience can be problematic. Not an expert in that part, though.
The codebase of an internal system our team inherited is pretty old (14 years) and large, and there's a feeling a lot of the stuff is unused (dead code or unused ancient features) and can be deleted. Fortunately, it's a monolith and all that matters is HTTP endpoints defined in a few config files. So we've made a script which downloads all access logs from the live server for the last 3 months and tries to find endpoints with zero or very low usage (it did find quite a few unused or rarely used endpoints). I'm planning the second step to be a script which builds a dependency graph and finds everything which is only reachable from unused endpoints. I wanted it to be a one-time endeavor but this blog post made me think it should be run on a regular basis.
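The first step can be sketched in a few lines of Python; the log lines and endpoint names below are made up stand-ins for the real access logs and config files:

```python
import re
from collections import Counter

# Toy stand-ins: real lines would come from the downloaded access logs,
# and KNOWN_ENDPOINTS from the endpoint config files.
LOG_LINES = [
    '10.0.0.1 - - [01/Jul/2023:10:00:00] "GET /api/users HTTP/1.1" 200',
    '10.0.0.2 - - [01/Jul/2023:10:00:01] "GET /api/users HTTP/1.1" 200',
    '10.0.0.3 - - [01/Jul/2023:10:00:02] "POST /api/orders HTTP/1.1" 201',
]
KNOWN_ENDPOINTS = {"/api/users", "/api/orders", "/api/legacy_report"}

# Count hits per request path.
pattern = re.compile(r'"(?:GET|POST|PUT|DELETE|PATCH) (\S+) HTTP')
hits = Counter()
for line in LOG_LINES:
    m = pattern.search(line)
    if m:
        hits[m.group(1)] += 1

# Endpoints defined in config but never hit in the observed window.
unused = sorted(e for e in KNOWN_ENDPOINTS if hits[e] == 0)
print(unused)  # ['/api/legacy_report']
```

The second step (the dependency graph) is the harder one, since "reachable only from unused endpoints" requires static analysis of the codebase rather than log parsing.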
Warning: a long, somewhat related story that is basically humblebragging, but the summary is that bypassing Twitter rate limits is not very hard.
I didn't feel like playing around with Twitter's annoying certificate pinning, so I just uploaded the Twitter APK to Corellium, turned on what they call the "network monitor", and opened the Twitter app, since it lets you use Twitter without signing in. I clicked around, searched, and viewed tweets. Then I looked at the requests in the log and saw it has a similar guest-token process to the website, but with a few differences.
Anyway, if you recreate these requests, with one IP address you can generate a few OAuth tokens per day with no expiry. These tokens are for unauthenticated users, so obviously they have no write privileges, but that's not what was needed here. So if you have a proxy provider with a large pool of IPs where you can buy, say, 1 GB of bandwidth, you can use a very small percentage of your bandwidth allowance and get thousands of tokens/secrets easily, all with their own separate rate limits. It doesn't even matter what IP you end up using the tokens on.
Then I followed https://docs.google.com/document/d/1xVrPoNutyqTdQ04DXBEZW4ZW... and used the fact that /statuses/lookup.json still allows you to return 100 (!) tweets at once to reconstruct something close to what the 50% Twitter firehose would look like. And Twitter doesn't even block datacenter IP addresses! I was going to display the data at https://firehose.lol, but the fact that it required a few hundred requests a second made me feel bad, so I didn't run the program for more than a few minutes at a time and shut it down.
Looking at (a fraction of) the firehose for a few minutes was interesting. At first I forgot to filter out tweets labelled possibly_sensitive, so I saw some pretty salacious material for a few seconds. Lots of Chinese gambling ads (even though Twitter is blocked there), dubious investment promoters, accounts with usernames like FirstnameLastname3781264872 tweeting three random words at each other every couple of seconds, and a handful of funny tweets.
Practical report: the OpenAI API is a bad joke. If you think you can build a production app against it, think again. I've been trying to use it for the past 6 weeks or so. If you use tiny prompts, you'll generally be fine (that's why you always get people commenting that it works for them), but just try to get closer to the limits, especially with GPT-4.
The API will make you wait up to 10 minutes and then time out. What's worse, it will time out between their edge servers (Cloudflare) and their internal servers, and because of the way OpenAI implemented billing, you will get a 4xx/5xx response code but still get billed for the request and for whatever the servers generated that you never received. That's borderline fraudulent.
Meanwhile, their status page will happily show all green, so don't believe that. It seems to be manually updated and does not reflect the truth.
Could it be that it works better in another region? Could it be just my region that is affected? Perhaps — but I won't know, because support is non-existent and hidden behind a moat. You need to jump through hoops and talk to bots, and then you eventually get a bot reply. That you can't respond to.
My support requests about being charged for data I didn't have a chance to get have been unanswered for more than 5 weeks now.
There is no way to contact OpenAI and no way to report problems; the API sometimes kind of works, but mostly doesn't, and if you comment in the developer forums, you'll mostly get replies from apologists explaining that OpenAI is "growing quickly". I'd say you either provide a production paid API or you don't. At the moment this looks very much like amateur hour, and charging for requests that were never fulfilled seems like fraud to me.
So, consider carefully whether you want to build against all that.
I absolutely love ffmpeg, but for the life of me I cannot understand how its pipeline system works.
Each time I need to use it, I attempt to construct the command myself, but end up giving up and consulting StackOverflow. Amazingly, someone has usually done the exact thing I need to do and posted their command line to StackOverflow, so I'm never out of luck!
How do I actually start understanding how ffmpeg works? I want to be an ffmpeg power user.
Not a deep learning expert, but it seems that without backpropagation for model updates, the communication costs should be lower, and that would enable models that are easier to parallelize?
Nvidia isn't creating new versions of its NVLink/NVSwitch products just for the sake of it; better communication must be a key enabler.
Can someone with deeper knowledge comment on this? Is communication a bottleneck, and will this algorithm uncover a new design space for NNs?
"Do you want to learn about blogspam? If so, you are on the right page. You will learn about blogspam here. One of the most major things about blogspam is that it exists on the Internet. You just learned that blogspam exists on the Internet. In this way, you have become educated about blogspam.
Now that you know about blogspam, we'll move on to the next topic: How to find blogspam. It's actually very easy to find blogspam. You are on the right page if you want to learn about finding blogspam..."
This general trend of deteriorating UX in widespread services is extremely exhausting and depressing.
The accuracy you could get with Google searches in 2005-2010-ish was amazing. Knowing how to Google was a secret weapon of computer literacy; you could find anything. Now I save links on a small server (I work on many machines), unless something is posted here on HN, as it's one of the increasingly few places where I can search and actually find things.
Same story with YouTube; searching for a video there has become an exercise in self-flagellation.
I don't know if it was Netflix that got the ball rolling on nonsensical, avant-garde categories followed by thousands of auto-playing calls to action, but combined with intrusive ads and obfuscation, the day-to-day experience on the mainstream internet really sucks the joy out of the amazing content that lurks in the cracks of what is actually a marvelously capable set of technology.
What's cool here is that in order for this to work, you need a ton of existing infrastructure. Trying to build that in one forklift effort probably would have been a disaster. So instead, Cloudflare bided their time building their DDoS product, which everyone wanted and was willing to pay for, and which let them acquire operational expertise, staffing, technology, and, importantly, massive internet infrastructure that had been tested the hell out of.
Then, instead of taking the big leap, they took an incremental step with WARP (their VPN) and let consumers bang on it for a year (approximately 10 million end users).
All along, they've been working towards this vision, which is really comprehensive.
Reminds me of how Amazon destroyed the original IaaS/PaaS providers (Loudcloud, etc.) not by trying to compete with them, but by doing 1% of what Loudcloud did, very well, for a lot of people. Then 2%, then 4%...
Amazing how companies that start off building small "trivial" things can use that to lever themselves up to complex and comprehensive ecosystems.
WARP, Magic Transit, CNI - all of them were precursors to this vision. Which, if they pull it off, is going to be $$$.
Dara's email was really well written, and felt as compassionate as a letter from a CEO announcing job cuts can be.
The full email:
Team Uber:
These have been unprecedented and challenging times for everyone—our societies, our governments, our families, our economies, all around the world. They’ve also been challenging for Uber, and many of you, as you’ve waited for us to define the road ahead. I’ve said clearly that we had to take tough action to resize our company to the new reality of our business, and that I would come back to you this week with the specifics.
Today I have the specifics: we have made the incredibly difficult decision to reduce our workforce by around 3,000 people, and to reduce investments in several non-core projects. As a leadership team we had to take the time to make the right decisions, to ensure that we are treating our people well, and to make certain that we could walk you through our decision making in the sort of detailed and transparent manner you deserve.
Where we started and hard choices
We began 2020 on an accelerated path to total company profitability. Then the coronavirus hit us with a once-in-a-generation public health and economic crisis. People are rightfully staying home, and our Rides business, our main profit generator, is down around 80%. We’re seeing some signs of a recovery, but it comes off of a deep hole, with limited visibility as to its speed and shape.
You’ve heard me say it before: hope is not a strategy. While that’s easy to say, the truth is that this is a decision I struggled with. Our balance sheet is strong, Eats is doing great, Rides looks a little better, maybe we can wait this damn virus out...I wanted there to be a different answer. Let me talk to a few more CEOs...maybe one of them will tell me some good news, but there simply was no good news to hear. Ultimately, I realized that hoping the world would return to normal within any predictable timeframe, so we could pick up where we left off on our path to profitability, was not a viable option.
I knew that I had to make a hard decision, not because we are a public company, or to protect our stock price, or to please our Board or investors. I had to make this decision because our very future as an essential service for the cities of the world—our being there for millions of people and businesses who rely on us—demands it. We must establish ourselves as a self-sustaining enterprise that no longer relies on new capital or investors to keep growing, expanding, and innovating.
We have to take these hard actions to stand strong on our own two feet, to secure our future, and to continue on our mission.
I know that none of this will make it any easier for our friends and colleagues affected by the actions we are taking today. To those of you personally impacted, I am truly sorry. I know this will cause pain for you and your families, especially now. Many of you will be affected not because of the quality of your work, but because of strategic decisions we made to discontinue certain areas of activity, or projects that are no longer necessary, or simply because of the stark reality we face. You have been a huge part of this company and every day forward we will build on the foundations that you established, brick by brick.
Our decisions and the road forward
We have decided to re-focus our efforts on our core. If there is one silver lining regarding this crisis, it’s that Eats has become an even more important resource for people at home and for restaurants; and delivery, whether of groceries or other local goods, is not only an increasing part of everyday life, it is here to stay. We no longer need to look far for the next enormous growth opportunity: we are sitting right on top of one. I will caution that while Eats growth is accelerating, the business today doesn’t come close to covering our expenses. I have every belief that the moves we are making will get Eats to profitability, just as we did with Rides, but it’s not going to happen overnight.
So we need to fundamentally change the way we operate. We need to make some really hard decisions about what we will and won’t do going forward, based on a few principles:
We are organizing around our core: helping people move, and delivering things.
We are building a cost-efficient structure that avoids layers and duplication and can scale, at speed.
We are being intentional with our location strategy focused on key markets/hubs.
Mac will now lead a unified Mobility team, which will include Rides and, as of today, Transit. Mac will continue to manage our cross-cutting functions like Safety & Insurance, CommOps, U4B, and Business Development, the latter of which will be centralized across Rides, Eats, and Freight under Jen. Pierre will lead what we will call “Delivery” internally, encompassing Eats, Grocery and Direct.
Given the necessary cost cuts and the increased focus on core, we have decided to wind down the Incubator and AI Labs and pursue strategic alternatives for Uber Works. Due to these decisions, Zhenya has decided it makes sense to move on from Uber. Zhenya is customer-centric to her core, and I am deeply grateful for all of her hard work.
We are also looking at our geographic footprint. While it served us well for many years to cast a wide physical net, it’s time to be more intentional about where we have employees on the ground. We are closing or consolidating around 45 office locations globally, including winding down Pier 70 in San Francisco and moving some of those colleagues to our new HQ in SF. And over the next 12 months we will begin the process of winding down our Singapore office and moving to a new APAC hub in a market where we operate our services.
Having learned my own personal lesson about the unpredictability of the world from the punch-in-the-gut called COVID-19, I will not make any claims with absolute certainty regarding our future. I will tell you, however, that we are making really, really hard choices now, so that we can say our goodbyes, have as much clarity as we can, move forward, and start to build again with confidence.
How we are helping departing employees
As we previewed last week, we have taken a lot of feedback and worked to provide strong severance benefits and other support for those leaving Uber, like healthcare coverage and an alumni talent directory. We’re also taking care to support people in special situations a bit differently, like those on US visas or parental leaves. While the details will differ slightly by country, you can see a summary here. Every departing employee will have a 1:1 to receive the details of their individual package.
Given the global nature of these changes, and the local rules and regulations involved, the individual experience today will vary by country:
All other countries (those not listed below):
- In these countries, we can communicate about individual impacts today.
- Everyone in these countries who is affected has already received an email, and will soon have a calendar invitation to a private meeting with a manager and HR.
- If you are in one of these countries and you did not receive a separate email this morning, you are not affected.
Argentina, China, France, Germany, India, Ireland (COE only), Italy, Kenya, Netherlands, Norway, Pakistan (Karachi only), Poland, Portugal, Slovakia, South Africa, Spain, Turkey, UK (ULL only):
- In these countries, local laws mean that we cannot be as specific about individual impacts today.
- In some countries, we will start a consultation process. In others, there are restrictions on making changes during the COVID lockdown.
- If you are in one of these countries, you will get an email from Nikki describing next steps for your location.
If you are one of the many affected Uber teammates, I’ll acknowledge right here that any package we offer, regardless of how thoughtful or generous, will never replace the opportunity to belong, to make a difference, to establish the kinds of bonds you establish with any important company or cause. We wouldn’t be here without you. We will finish what you started, and we will be excited to see the great things that you will build next.
I am incredibly thankful to everyone reading this email, because the resilience and grit you’ve shown has made Uber the company it is and will continue to be. I’ve never had a harder day professionally than today, but Uber has consistently surprised me with the challenges it has thrown my way. But it’s the toughest challenges that are worthwhile, and I know even more strongly in my heart than I ever have that Uber is worth it, and more.
> 99.999% = What companies like Google or Yahoo can achieve
Even worse. Five nines is five minutes of downtime per year. The core Google search experience blew five nines for half a decade with just one outage -- the one where they marked the entire Internet as a malware site, which took something like 40 minutes to address.
This kind of thing makes me dismiss talk of nines as fetishism or sales-speak. You can say your system is going to have five nines of uptime at the application level. You're probably lying.
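The arithmetic behind the nines talk, for anyone who wants to check the "five minutes per year" figure:

```python
# Translate 'nines' of availability into a downtime budget per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for n in range(2, 6):
    downtime = MINUTES_PER_YEAR * 10 ** -n  # unavailability fraction is 10^-n
    print(f"{n} nines: {downtime:,.2f} minutes of downtime per year")
```

Five nines is a budget of roughly 5.3 minutes per year, so a single 40-minute incident blows the budget for the better part of a decade.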
P.S. Pricing-wise, a client who wants > 99.5% either wants to pay mid-six figures (and up up up) or they want something which is deeply irrational for you to offer.
I'm sorry you didn't have a good interview at Digg. My goal, then and now, is that even people who don't get an offer should leave feeling okay about the process. I clearly didn't live up to that with you, and I apologize for it.
We never made a no-hire decision based on one answer, and nobody ended an interview after one question, unless there was some sort of emergency. Site outage, fire alarm, that sort of thing.
> He went on to ruin digg.com (the rewrite everything guy)
I didn't ruin Digg. Ruining Digg was an all-hands-on-deck multi-year team effort. I wasn't involved in the decision to rewrite it and didn't agree with it. I don't know for sure who was, or what pressures were at work behind it.
What I do know is that our VPE came to me and said that we were going to rewrite Digg from scratch, and do it in six months, because the code was a mess and it took too long to do anything. Which was true.
I told him it was a terrible idea and we should figure out the end point we wanted to be at, then incrementally refactor our way towards it.
He said we'd tried that and it didn't work, so we were throwing away everything and rebuilding it from scratch. In six months.
I said that if we wanted the slightest possibility of success, we'd have to cut features to the bone and ship a minimal version we could quickly iterate on. I suggested cutting the ability to comment on stories.
He told me he didn't think we needed to do that, and we were going to ship a feature-complete version of Digg in six months, from scratch.
It was a completely bananas project, doomed from day one, and I wasn't shy about saying so — I told anyone who'd listen that I gave it a 50/50 chance of destroying the company. The promises made about what would be delivered & when were completely unrealistic and unreasonable. There was nobody articulating what we were supposed to be building, or to say no to what shouldn't get built. Into that leadership vacuum flowed a torrent of well-intentioned but (in my opinion) misdirected ideas for what the thing should be, which ate that first six months in the blink of an eye.
Any other lightweight trimming tools people would recommend?