I wonder if it’s time to start seriously looking at why some people have completely embraced Copilot/ChatGPT while others cannot get it to work at all.
I’d love to see which of the following might have any correlation with the reported usefulness of ChatGPT/Copilot:
1. How articulate a person is.
2. How much of an expert they are in what they are trying to do with the tool.
3. How successful they are at giving instructions to a human to perform the same task.
4. How much experience they have managing/coaching junior devs/interns/newbies.
5. How much experience they have decomposing a problem into smaller parts and identifying the simple parts and the complicated parts.
There are huge consequences to either answer to the question: “Is using an AI tool effectively a coachable skill?” I’m sure someone has already looked into this, or is looking into it now. If it turns out to be a coachable skill, and we can identify what the underlying skills are, there’s a lot of money to be made in bringing coursework to market.