Hacker News | joelwilliamson's comments

Function colouring, deadlocks, silent exception swallowing, &c aren’t introduced by the higher levels, they are present in the earlier techniques too.

Function coloring also only applies to a few select languages. If your runtime allows you can call an async function from a sync function by pausing execution of the current function/thread whenever you're waiting for some async op.

Libraries like Tokio (mentioned in the article) have support for this built-in. Goroutines sidestep the issue completely. C# Tasks are batteries included in that regard. In fact function colors aren't an issue in most languages that have async/await. JavaScript is the odd one out, mostly due to being single-threaded. Can't really be made to work in a clean way in existing JS engines.


“Function coloring” is an imaginary issue in the first place. Or rather it's a real phenomenon, but absolutely not limited to async and people don't seem to care about it at all except when talking about async.

Take Rust: if you return `Result<T,E>`, you are coloring your function the same way as when using `async`. Same for Option. Errors as return values in Go: again, function coloring.
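
A std-only Rust sketch of the claimed analogy (function names invented for illustration): once a callee returns `Result` and a caller propagates with `?`, that caller's own signature must become `Result` too.

```rust
// Sketch: `?` propagation "colors" the caller's signature.
fn read_port() -> Result<u16, std::num::ParseIntError> {
    "8080".parse::<u16>()
}

// Because this caller uses `?`, its return type must also be a Result.
fn configure() -> Result<u16, std::num::ParseIntError> {
    let port = read_port()?;
    Ok(port + 1)
}

fn main() {
    assert_eq!(configure().unwrap(), 8081);
}
```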

One of your nested functions starts taking a "serverUrl" input parameter instead of reading an environment variable: you've colored your function, and you now need to color the entire call stack (every caller must take the url parameter itself).
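
A minimal Rust sketch of that situation (the `server_url` parameter and all function names are hypothetical): once the innermost function takes the parameter, every intermediate function must accept and forward it.

```rust
// The leaf function now takes a parameter instead of reading a global.
fn fetch(server_url: &str) -> String {
    format!("GET {}", server_url)
}

// Every intermediate layer must accept and forward the parameter...
fn middle(server_url: &str) -> String {
    fetch(server_url)
}

// ...all the way up the call stack.
fn top(server_url: &str) -> String {
    middle(server_url)
}

fn main() {
    assert_eq!(top("https://example.test"), "GET https://example.test");
}
```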

All of these are exactly as annoying, since you need to rewrite the entire call stack's function signatures to accommodate the change, but somehow people obsess about async in particular as if it were something special.

It's not special, it's just the reflection that something can either be explicit and require changing many function signatures at once when making a change, or be implicit (with threads, exceptions or global variables) which is less work, but less explicit in the code, and often more brittle.


Function coloring does not mean that functions take parameters and have return values. Result<T,E> is not a color. You can call a function that returns a Result from any other function. Errors as return values do not color a function, they're just return values.

Async functions are colored because they force a change in the rest of the call stack, not just the caller. If you have a function nested ten levels deep and it calls a function that returns a Result, and you change that function to no longer return a result because it lost all its error cases, you only have to change the direct callers. If you are ten layers deep in a stack of synchronous functions and suddenly need to make an asynchronous call, the type signature of every individual function in the stack has to change.

You might say "well, if I'm ten layers deep in a stack of functions that don't return errors and have to make a call that returns an error, well now I have to change the entire stack of functions to return the error", but that's not true. The type change from sync to async is forced. The error is not. You could just discard it. You could handle it somehow in one of the intervening calls and terminate the propagation of the type signature changes halfway up. The caller might log the error and then fail to propagate it upwards for any number of reasons. You aren't being forced into this change by the type system. You may be forced to change by the rest of the software engineering situation, but that's not a "color".
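
A small Rust sketch of that point (names invented for illustration): the error can be absorbed partway up the stack, so the `Result` signature change stops there.

```rust
// The leaf becomes fallible...
fn leaf() -> Result<i32, String> {
    Err("disk on fire".to_string())
}

// ...but this intermediate layer absorbs the error, so the
// signature change stops propagating here.
fn middle() -> i32 {
    leaf().unwrap_or(0)
}

// Callers above are completely untouched.
fn top() -> i32 {
    middle() + 1
}

fn main() {
    assert_eq!(top(), 1);
}
```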

For similar reasons, the article is incorrect about Go's "context.Context" being a coloration. It's just a function parameter like anything else. If you're ten layers deep into non-Context-using code and you need to call a function that takes a context, you can just pass it one with context.Background() that does nothing context-relevant. You may, for other software engineering reasons, choose to poke that use of a context up the stack to the rest of the functions. It's probably a good idea. But you're not being forced to by the type system.

"Coloration" is when you have a change to a function that doesn't just change the way it interacts with the functions that directly call it. It's when the changes forcibly propagate up the entire call stack. Not just when it may be a good idea for other reasons but when the language forces the changes.

It is not, in the maximally general sense, limited to async. It's just that sync/async is the only such color that most languages in common use expose.


> If you are ten layers deep in a stack of synchronous functions and suddenly need to make an asynchronous call, the type signature of every individual function in the stack has to change.

well, this isn't really true - at least for Rust:

runtime.block_on(async {});

https://docs.rs/tokio/latest/tokio/runtime/struct.Handle.htm...


See my other post about the point. If you "just" turn an async back into a sync call by completely blocking the async scheduler, yes, you've turned the async call back into a sync call, but you've done that by completely eliminating async-ness. That's not a general solution. That is exactly what everyone back when Node was promoting this style spent paragraph upon paragraph warning you not to do, because it just punts by eliminating the asyncness entirely. "Async is great when you completely discard it" is not a winning argument for async... which is OK, because it's not the argument anyone is making.

> If you "just" turn an async back into a sync call by completely blocking the async scheduler,

I am not doing that. The caller (which is the only one being blocked here) is sync anyway and just wants to call an async function, so no async scheduler is blocked.


In Kotlin, it’s runBlocking {<asynchronous-code>}.

This is a language specific problem, not a language pattern one.


If you are ten nested functions deep in sync code and want to call an async function you could always choose to block the thread to do it, which stops the async color from propagating up the stack. That's kind of a terrible way to do it, but it's sort of the analog of ignoring errors when that innermost function becomes fallible.

So I don't buy that async colors are fundamentally different.


You can exit an async/IO monad just like you can exit an error monad: you have a thread blocking run(task) that actually executes everything until the future resolves. Some runtimes have separate blocking threadpools so you don't stall other tasks.
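
As a sketch of such a blocking `run(task)`, here is a toy, std-only Rust `block_on` (a spin-polling executor with a no-op waker; real runtimes park the thread instead of spinning, and the `answer` function is made up for illustration):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Toy `block_on`: polls the future in a loop until it resolves.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    // A waker that does nothing; fine for a spin loop.
    fn raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { raw_waker() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is a local we never move after pinning.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

async fn answer() -> i32 {
    21 * 2
}

fn main() {
    // Sync code exits the async "monad" by blocking until resolution.
    assert_eq!(block_on(answer()), 42);
}
```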

If you have something in a specific language that does not result in having to change the entire call stack to match something about it, then you do not have a color. Sync/async isn't a "color" in all languages. After all, it isn't in thread-based languages or programs anyhow.

Threading methodology is unrelated, though. How exactly the call stack is scheduled is orthogonal to the question of whether or not making a call to a particular function results in type changes being forced on all functions in the entire stack.

There may also be cases where you can take "async" code and run it entirely outside the context of any sort of scheduler, where it can simply be turned into the obvious sync code. While that does decolor the resulting call (or, if you prefer, recolor it back into the "sync" color), it doesn't mean that async is not generally a color in code where that is not an option. Solving concurrency by simply turning it off certainly has a time and place (e.g., a shell script may be perfectly happy to run "async" code completely synchronously because it may be able to guarantee nothing will ever happen concurrently), but that doesn't make the coloration problem go away when that is not an option.


You are stuck in a fixed pattern of thinking where async==color. Here's the meme origin: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...

Here's the list of requirements:

1. Every function has a color.
2. The way you call a function depends on its color.
3. You can only call a red function from within another red function.
4. Red functions are more painful to call.
5. Some core library functions are red.

You are complaining about point 3. You are saying that if there's any way to call a red function from a blue function, then it's not real. The type change from sync to async is not forced any more than changing T to Result<T,E>: you just get a Promise from the async function. So you logically conclude that async is not a color. By that reasoning, even a Haskell IO value can be used in a pure function if you don't actually do the IO or if you use unsafePerformIO. This is nonsense. Anything that makes the function hard to use can be a color.


And you appear to have read my post way too hastily in your rush to get to the point you wanted to make, because I can't even correlate what you're saying with what I actually said, in particular the argument that I'm "stuck" on async being the only color.

I have inside information about how that is not the case, since I typed, but deleted before posting, some points about how Haskell is one of the few languages that can create arbitrary numbers of colors in libraries. But it started spiraling when I tried to characterize when a new color is created. It is, for instance, not as simple as "it's monadic", because types that implement "monad" but allow extracting the value, like List or Maybe, don't create colors even though IO does. So I just deleted it, especially when my thoughts turned to trying to explain how Haskell is also one of the few languages that can abstract over color. (Although I gather Zig is giving it the college try with sync/async.) Using monads as an example is just an invitation to trouble, since way more people think they understand them than actually do.

A function simply taking an annoying amount of parameters is not a color problem, because the entire essence of "color" is the transitivity. If you don't have the transitivity flowing up the call stack, you don't have color. Just being "hard to use" is not sufficient... the proof of which being that we've had "hard to call" functions for many more decades than we've had widespread "color" in our functions and nobody was talking about this transitivity problem until we started having async functions. That is a new problem... I mean, new on the relevant time frames... the whole color thing has been ongoing for years now, but it's still relatively new.


A function taking a new parameter that you have to pass all the way down the call stack is a color. If you have a large Haskell application, and then you decide that something 50 functions deep needs to access the user database, you've added a color. It's color if it affects the whole call stack in reality. You could pass an empty user database, but you obviously won't.

But we end up talking past each other if we're using different definitions of the word "colored".

Returning errors isn't function coloring, it's a fundamental language design choice by Go.

You can still use a function that returns a Result inside a function that uses Option.

And Result and Option usually mean something else. Option is a value or none; None doesn't necessarily mean the function failed. Result is the value or an error. You can have Result<Option<T>, Error>.

That's different from async, where you can't simply call a function of the other color.


Function coloring is an effect. If the language makes a distinction between sync and async, then it has that effect. Just because there are escape hatches to get around one effect doesn't really change this fact.

Like in Haskell there is the IO monad used to denote the IO effect. And there are unsafe ways to actually execute it - does that make everything in Haskell impure?


I wish the “Function coloring” meme would die. It made sense in the context of the original blog post (which was about callback hell, hence the “4. Red functions are more painful to call” section in the original blog post), but doesn't make sense in the context of async/await. There's literally nothing special about async, it's just an effect among many others.

As soon as you start using function arguments instead of using a global variable, you are coloring your function in the exact same way. Yet I don't think anyone would make the case that we should stop using function arguments and use global variables instead…


I think the lesson is to be careful about introducing incompatibility via the type system. When you introduce distinctions, you reduce compatibility. Often that’s deliberate (two functions shouldn’t be interchangeable because it introduces a bug) but the result is lots of incompatible code, and, often, duplicate code.

Effects are another way of making functions incompatible, for better or worse. It can be done badly. Java fell into that trap with checked exceptions. They meant well, but it resulted in fragmentation.

Sometimes it’s worth making an effort to make functions more compatible by standardizing types. By convention, all functions in Go that return an error use the same type. It gives you less information about what errors can actually happen, but that means the implementation of a function can be modified to return a new error without breaking callers.
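
A rough Rust analog of that trade-off (the API here is hypothetical; the erased `Box<dyn Error>` plays the role of Go's single `error` type): exposing only a base error type pushes callers toward generic handlers, so the implementation can grow new concrete errors without breaking them.

```rust
use std::error::Error;
use std::fmt;

// A concrete error the implementation might add later.
#[derive(Debug)]
struct TimeoutError;

impl fmt::Display for TimeoutError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "request timed out")
    }
}

impl Error for TimeoutError {}

// The public signature only promises "some error", like Go's `error`.
fn api_call(fail: bool) -> Result<i32, Box<dyn Error>> {
    if fail { Err(Box::new(TimeoutError)) } else { Ok(7) }
}

fn main() {
    // Callers write a generic handler; new error types won't break it.
    let fallback = api_call(true).unwrap_or_else(|e| {
        eprintln!("error: {e}");
        -1
    });
    assert_eq!(fallback, -1);
    assert_eq!(api_call(false).unwrap(), 7);
}
```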

Another example is standardizing on a string type. There are multiple ways strings can be implemented, but standardization is more important.


You can also use type inference with union types like ZIO. So you could e.g. return a Result where the error type is `DatabaseError | InvalidBirthdayError`. If you're in an error monad anyway, and you add a new error type deep in the call stack, it can just infer itself into the union up the stack to wherever you want to handle it.

That will help locally, but for a published API or a callback function where you don't know the callers, it's still going to break people if you change a union type. It doesn't matter if it's inferred or not.

IIRC the ZIO solution is actually to return a generic E :> X|Y. The caller providing the callback knows what else is on E, and they're the only one who knows it, so only they could've handled it anyway. You still get type inference.

Or if you mean that returning a new error breaks API compatibility, then yes that's the point. If now you can error in a different way, your users now need to handle that. But if it's all generic and inferred, it can still just bubble up to wherever they want to do that with no changes to middle layers.


If you declare specific error types and callers only write handlers for specific cases, then adding a new error breaks them. If you just declare a base error type in your API, they have to write a generic error handler or it doesn't type check.

In this way, declaring a type guides people to write calling code that doesn't break, provided you set it up that way. It makes things easier for the implementation to change.

Sometimes you do need handlers for specific errors, but in Go you always need to write generic error handling, too.

(A type variable can do something similar. It forces the implementation to be generic because the type isn't known, or is only partially known.)


Usually in Scala errors subtype Throwable, so as long as new union members continue to do so, if you wanted a generic handler (e.g. you just log and return) you could handle that. But you can also be more specific with actual logic, with the benefit that if you choose to do so and the underlying implementation changes, you detect it.

Go also basically requires you to write actually generic error code (if err != nil return err), so it feels like errors are more work than they are. In Scala (especially ZIO) propagation is all automatic and you just handle it wherever you'd like. The only code that cares about errors is the part generating them and the part handling them. But it's all tracked in the type system so you can always see what exact errors are possible from what methods. And it's all simple return values so no trickiness with exceptions (e.g. across async boundaries).
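
A hedged Rust analog of that propagation style (all names here are invented for illustration): with `?` and a `From` impl, only the error producer and the final handler mention the error, and middle layers just propagate.

```rust
// The application's own error type.
#[derive(Debug, PartialEq)]
enum AppError {
    Parse,
}

// `From` lets `?` convert the library error automatically.
impl From<std::num::ParseIntError> for AppError {
    fn from(_: std::num::ParseIntError) -> Self {
        AppError::Parse
    }
}

// The producer: the only place the underlying error is mentioned.
fn parse(s: &str) -> Result<i32, AppError> {
    Ok(s.parse::<i32>()?)
}

// A middle layer: propagation is automatic, no `if err != nil` boilerplate.
fn double(s: &str) -> Result<i32, AppError> {
    Ok(parse(s)? * 2)
}

fn main() {
    // The handler: the other place errors appear.
    assert_eq!(double("21"), Ok(42));
    assert_eq!(double("x"), Err(AppError::Parse));
}
```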


I mean, the very point of a type system is to introduce distinctions and reduce compatibility (compatibility of incorrectly typed programs).

Throwing the baby out with the bathwater, as Go sort of does with its error handling, is no solution. The proper solution is a better type system (e.g. a result type with generics handles what Go can't).

For effects, though, we need type systems that support them - but so far those are only available in research languages. You could actually just be generic in effects (e.g. an fmap function applying a lambda to a list could just "copy" the effect of the lambda to the whole function - this can be properly written down and enforced by the compiler).


Async/await will be equivalent to parameters when they are first class and can be passed in as parameters. Language syntax and semantics are not equivalent and colored functions are colored by the syntax. Zig avoided colored functions by doing something very much like this.

Using globals or arguments is a free choice independent of the context. If I call async code I don't have a choice.

async/await is just syntax sugar over callback hell

Sure. But it's syntax sugar that allows you to write functions whose content isn't indented 300 columns to the right and whose flow of control is much easier to reason about. <shrugs>

Who calls WWI, stagflation or Covid good old times? Only the post-WWII boom was really a good time.


When did that happen? Did they admit guilt in the big settlement, or was there a different case?


I think tourer was arguing that the Nazis were a template for how to use speech restrictions to maintain power.


GitHub is a poor example of Chinese internet restrictions, since access to it is usually fine from China.


They could open source it and not even have a Github project associated. Just provide a read-only git repo on anthropic.com or drop a source tarball every release.


Then a ton of vibe-coded Claude Code forks out of their control would pop up on GitHub and people would be even more frustrated at Anthropic for not fixing their issues.


Give 10 years of copyright for free, then a $1000 fee for the next decade, and make every subsequent decade 100x more expensive.


Nah, there's no reason why trillion dollar companies should be allowed to pay anything to keep our shared culture locked up. Doing so only hinders innovation and the creation of new works. 14 years was long enough back when global distribution was unimaginable and any distribution at all was highly expensive.

Today you can instantly distribute media to the entire planet at near-zero expense. If you can't make money after a decade, you have only yourself or your product to blame. It's also not as if all income stops once something goes into the public domain. With even a small amount of effort, creators can continue to successfully package and sell their stuff to fans even when it's available for free. It's worked on me several times, in fact.


Even Amazon Prime’s catalogue is only a third the size of what Netflix had 15 years ago.


Have you tried Real World Haskell?


No, but the table of contents looks promising, thanks!


See also What I Wish I Knew When Learning Haskell: https://sdiehl.github.io/wiwinwlh/

It's more up to date.


He’d be dead either way; the question is whether having those three years was a net improvement to his life.


Not for us to question or answer though.


By that logic we should invoke the death penalty for everyone who has been sentenced to life in prison and has exhausted all their appeals, or any seniors convicted of a crime.

Their life probably won't improve anymore, and in the latter case they're going to die in a few years anyway, so might as well just lighten the load on society?


No, you'd let them decide if they want to die.


Putting that up for discussion makes the world worse than any suffering that may be experienced during that time.


3 years of living vs. dying is a 3-year net improvement on life. Such a silly statement.

By your logic we should kill everyone at their peak.


I've known at least two old people who were literally looking forward to their death because of chronic pain and the general boredom and frustration of requiring 24/7 assistance and not being able to live the way they used to.

They would have likely used assisted suicide if it had been an option back then.


In this case the man doesn't want to die, but others are suggesting it to make it easier on society.


Which is really the scariest part about this whole discussion. Plenty of NPCs are already repeating how expensive it is to keep old people alive; it's only a matter of time until old people are encouraged to make the right choice - or have it made for them if they are not capable of making it.


On the contrary, I urge you to consider whether it is your statement that is overly dismissive. Is there perhaps some existing conditioning, maybe in the form of religious upbringing, that is driving your reaction to this? Many of us in fact find OP's comment very thoughtful rather than a "silly statement".

> By your logic we should kill everyone at their peak.

No, they suggested that the old and ailing whose quality of life has deteriorated to the point where there is no hope or no more joy in living, ought to be given the choice.

Let me end by quoting my favourite lines from the HN guidelines:

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."


The problem is on your end.

Consider the following scenarios:

There is a red button that orders your euthanasia. Pressing it instantly teleports you to a euthanasia facility and leads to your death unless you say no within 30 seconds. The button reads your fingerprint and can only be pressed by you. (Assume science fiction level technology to make this true)

1. The button is located 5000 km away from you in an unknown location.

2. The location is known.

3. You can order the delivery of the button to you for $50

4. The button is in your basement

5. The button is next to your bed

6. The button is on your keyboard and mouse

7. The button is on your keychain

Now consider there is a blue button with the same rules as above, which makes you feel compelled to press the first button for a day and it can be pressed by anyone.

You'd want the red button as far away from you as possible and the blue button secured in a location that is as inaccessible to others as possible.

In today's society there are too many people obsessed with pressing blue buttons. Also, pressing blue buttons is not a crime, because red buttons happen to be pretty far away from most people.

But now there are people obsessed with pressing red buttons. They want to ship the red button to your house on your behalf, while thinking they are doing you a favor.

This would be okay if the blue button pressing people were a minority and there was a punishment for pressing blue buttons, but it turns out both positions are popular and when averaged together, the buttons will be placed next to each other, thereby turning the blue button into a second red button.


I see nobody obsessing about pushing red buttons. I see people that would like for option #3 to exist. And when death approaches, option #5.

A simple test of how people feel: consider the Twin Towers. We saw quite a few people choose jumping over burning. We do not question people making such a choice. It is the same choice, just on a much more compressed time scale.

(And we have the bonkers case out of WWII: the guy survived, apparently uninjured. Someone who made the choice and was still around to be asked why. We don't know exactly what happened; no analysis was made at the time, but attempts to reconstruct the situation suggest he probably hit the outer part of a pine tree and then rolled down a snowbank. He had on heavy clothing and had blacked out during the fall--not exactly surprising, as he jumped from 18,000'.)


> The problem is on your end

followed by

> There is a red button [...] buttons [...] button [...] button [...] button [...] button [...] button [...] button [...] button [...] button [...] button [...] buttons [...] buttons [...] button [...] button [...] button [...] button [...] button [...] button [...] button [...] button [...] second red button

Not sure the problem is on their end!


That is a very good analogy.


They are suggesting a man who is making life hard on others should die for society's sake, which I think is wrong. No one is saying that those who choose to die shouldn't have that choice; rather, it's not society that should be making the choice.


In medical research on treatments, the outcome is often measured in quality-adjusted life years, because just keeping people alive at any cost is a bad metric.


3 years of living in constant pain - not saying it’s the case here - is not better than being dead to some people.


That's literally a one-dimensional analysis. Are you sure you're not missing any other relevant factors? I find it hard to believe you uncritically think 'more = better' in every context.


More doesn't equal better, but it's no one's choice but the person's own. Not society or the medical system assigning a quality-of-life score.


A beautiful woman dies twice, as the old saying goes.

While what you say is extreme, there is a point in the decline past which there is no point in living. If you have something worth living for, cling to life until 107 if you like. But if the only thing that awaits you is to slowly decay and fade and lose yourself, what is the point?

