Honestly, if you really have to ask the question, then none of this matters, because it sounds like you are already delegating your career to AI, which would make this list unapproachable.
Forget what AI can and cannot do. What can you do?
If you are only doing data entry into an LLM without understanding how any of this actually works then what do I need you for? I can just promote the janitor at half the cost to do your job.
Ability to think like a human who will use your product. It's the small details, like looking at a web page and understanding whether it "makes sense" and "looks good", and what improvement would make usability better. AI can write the code and follow best practices, but the ability to think like a real human user and be a true Product Manager is something that will always be needed, IMHO.
Communication and networking - I think we'll see devs expected to bridge to the BA role and deliver based on that, so being able to communicate will become more important.
AI can write code but it struggles to know whether the architecture behind it is sound. People who can evaluate tradeoffs, debug distributed systems, or spot why an AI-generated solution will break at scale will be valuable for a long time.
Also anything involving trust and accountability, someone still has to own the output
Paramedicine and nursing. These roles will adapt and use AI, but because they're still so hands-on, and there's already a shortage of staff in those roles in general, I don't see job cuts there.
Judgment in ambiguous situations is the one thing that's held up consistently. AI is good at defined tasks, bad at knowing when the task definition itself is wrong.
Deep domain knowledge is the other one: knowing what good output looks like in your field is something models can't fake convincingly at the edges.
You see, I've never found those to be the social skills that get rewarded. More like arse-licking the boss, not pointing out that a bad idea is a bad idea, or taking credit for someone else's work.
So yeah maybe getting along with the right people... And mutual benefit with the right people.
If LLMs have roughly peaked, then everything is safe except for things that are already being eaten away like translation and call center work.
If they haven't and we have hit the exponential growth mark, nothing is safe and even the temporarily "safe jobs" will also suffer greatly from being crunched on both the supply and demand sides (there will be more labor supply for those jobs as the displaced try to flee to safe jobs, there will be less demand for the output of those jobs because the displaced will no longer have income to pay for those goods or services). And LLMs and robots will eventually come for many of those jobs too, likely at a rate that exceeds people's ability to retrain.
Better hope that either things have peaked, or that we can somehow manage to stop treating all forms of socialism as evil or we're going to see the violent unmaking of modern society in our lifetimes.
Robots require material resources and are quite difficult to produce. I wouldn't be surprised if we go through a period where intellectual work is outdated and most people are back to exhausting manual work. Basically, no middle class anymore: just some elites and many manual workers doing what the AI asks. I guess, to those future elites, humans would just be self-reproducing robots. (Robots like those we have now would definitely see use, but I am not sure about the timeline for general-purpose robots that can do many things, including assembling themselves.)
I don't have a strong belief this will happen however, and I hope it does not.
Knowing how to give AI good context. That's the skill nobody talks about. I use Claude Code daily, and the difference between a lazy prompt and a well-structured doc is massive.
Also, just understanding how the models work. I'm doing an AI master's right now, and once you know what's happening under the hood, the anxiety disappears.
The answer is of course obvious, and applies to any business domain across time and hype cycles: how to sell. That is, being a real old-fashioned salesman who has the ability to make deals and can bring money in.
Management - it occurred to me that giving instructions to an agent is very similar to giving instructions to human employees: even the best of them make mistakes.
I learned that asking Claude Code to "investigate 3 potential root causes" is more effective than "investigate the root cause" when fixing a bug. This blows my mind, as I realize an agent can be lazy and careless, and we can give better instructions to prevent that.
Another reason I say this is that giving enough context and defining a blast boundary is more efficient than hand-holding, micromanaging, and checking every tool call the agent makes. The management skills that work for human employees also work here.
Critical thinking - you just need to keep your own judgement about the seemingly solid but actually hallucinated agent BS.
Killing/rescuing people with your brain instead of bullets and/or creating/exploding structures.
Join the Army. Become a Combat Engineer Sergeant.
Enjoy getting told by your superiors that they are afraid of sitting in the same room with you, because your thinking cap gaze looks like you are always plotting to kill them in the most sophisticated and fun way imaginable. Never say a word, just give them a big friendly smile in return.
Leave with a treasure trove of abilities useful for the rest of your life, or to simply troll your neighbors, and give lifelong work to a local psychotherapist.
Psychology, psychiatry, medical, construction, auto repair, at least in the short term. The jury is still out on the long-term view which is a bit hazy at the moment :(
I agree with construction and auto repair, but why psychology and psychiatry? If there's anything that's perfect for LLMs, it's self-diagnosis and self-treatment by chatting with them. Other than prescribing drugs, an AI system could do everything a psychologist or psychiatrist does.
The only significant barrier is that it's not condoned by the medical establishment and by law (which I imagine will indeed take a few years to work around).
Those are good points, and true to be sure. But I specified that in the short term they are future proof. Long term, no one can predict.
I just feel like LLMs are not currently at the point where the medical profession can trust them with most things medical, including psychological diagnosis, since they habitually hallucinate. This is why some of the medical professions, including those mentioned above, are more or less safe in the short term. By the way, you can see the disclaimers from all these chat agents that they are not medical professionals. It's more of a legal-protection clause than caring advice, obviously.
One wrong diagnosis or comment and the patient could harm themselves or others, given the lack of real care available and the number of people suffering from mental illness due to societal pressures.
Honestly, given the pace of all things AI, I don't see any profession to be AI-proof.
It depends on the level, though. You can easily ask AI to "calculate the intersection with the X-axis for sin(2πx)", and I found many, and I mean MANY, errors in my textbook.
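For what it's worth, that particular example has a clean closed-form answer: sin(2πx) = 0 exactly when 2πx = kπ, i.e. at x = k/2 for every integer k. A few lines of Python (just a sanity-check sketch, not anything from the textbook in question) confirm it numerically:

```python
import math

# sin(2*pi*x) crosses the x-axis where 2*pi*x = k*pi,
# i.e. at x = k/2 for every integer k.
zeros = [k / 2 for k in range(-4, 5)]  # -2.0, -1.5, ..., 1.5, 2.0

for x in zeros:
    # Each candidate should make sin(2*pi*x) vanish (up to float error).
    assert abs(math.sin(2 * math.pi * x)) < 1e-9

print(zeros)
```

So any answer that isn't the half-integer lattice x = k/2 is one of those textbook (or model) errors.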
Metrology, mechanical and materials science engineering, manufacturing and tool engineering, precision engineering, and electrical and electronics engineering, combined with being a generalist and having one specialization in physical or hardware engineering along with computation.
As people often say, matter, energy, and information are the fundamentals of everything. I think we need mathematics, analytic philosophy, the arts and humanities, and physics too. Sorry we need every skill. /s
> Metrology, mechanical and materials science engineering, manufacturing and tool engineering, precision engineering, and electrical and electronics engineering, combined with being a generalist and having one specialization in physical or hardware engineering along with computation.
Now how does one get that if they aren't an 18-year-old in college with years and gorillions of dollars in government money to blow on an EE/CE program?
* leadership
* data structures
* task/project management
* performance/measurements
* data transmission techniques
Often better than what many developers I've worked with come up with.
In my team, we need to redesign our products because the main user is AI rather than humans.
As examples, check out:
Cosinuss: https://www.cosinuss.com/en/
Medictool: https://www.medic-tool.com/
LifesaverSim: https://www.lifesaversim.com/
But for the title question I’d say building houses.
bottom line: learn it and embrace it.
Big friendly smile. Two thumbs up.
I guess the enlistment age has been raised to 42 so this may actually be a realistic option for more people on this site lol.
Of course I was born just in time to be loaded up on psych meds as a child, so the military didn’t want me.
Have seen some smart comments from you. I am sure you’re doing fine.
Maybe try again, at least you were on psych meds.
You can become commander-in-chief these days by being off your meds. We live in interesting times.
bootlicking - to get promoted after you find your gig
good communication/leadership - to keep yourself in that high position