I said in my 2024 predictions that AI advancements would usher in a year when people would start to rethink their jobs and identity.
Because for most of us, work is life, and life is work. Recent McKinsey data shows as much:
“70 percent of surveyed employees said their sense of purpose is defined by their work, and the leading driver of performance and productivity is the sense of purpose work provides.” – McKinsey
And, as we shared on Tuesday in our AI & Careers newsletter, Stay Ahead, experts now agree that we need to “Embrace The Fact That AI Will Dominate Careers In 2024.”
As that article discusses, some jobs will be replaced, something I’ve highlighted before:
- From Coders to Writers: Jobs AI Will Replace.
- A Jobless Future? Interrogating Musk's AI Prophecy.
- The 3-Day Workweek is Here, Says Bill Gates. I Agree.
- AI, the era of the 1-person unicorn and massive job losses. (on e27)
- Software engineers were more valuable than capital, AI may change that. (on CNBC)
Overnight, that need to rethink jobs and identities dawned on many videographers, SFX artists, and other creators when OpenAI dropped Sora, its new video AI model.
This week, let’s look at what Sora is and does, and how it impacts our world of work.
What is Sora and What Does It Do?
Well, since a picture is worth a thousand words, let’s look at this demo from OpenAI:
And if you have seen most of those already, check out this Eleven Labs one-up, where they put AI-generated audio on top of AI-generated video:
Sora is what OpenAI calls “an AI model that can create realistic and imaginative scenes from text instructions.”
Simply put:
- You can now prompt a minute-long video the same way you’d create an image in DALL-E or Midjourney.
- Instead of writing a prompt, you can also give it an existing photo or video and ask it to generate either the minute leading up to it or the minute that follows.
- Sora then creates the video exactly as you described it. (It can even create perfect loops.)
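If you’re curious what that might look like for a developer, here’s a purely hypothetical sketch in Python. Sora has no public API at the time of writing, so the endpoint, model name, parameters, and response shape below are all my assumptions, loosely modeled on how OpenAI’s existing image APIs are prompted today:

```python
# Purely hypothetical sketch: Sora has NO public API at the time of writing.
# The endpoint, model name, parameters, and response shape are assumptions,
# loosely modeled on how OpenAI's existing image APIs are called.
import requests

API_KEY = "sk-..."  # placeholder; a real key would come from your OpenAI account

payload = {
    "model": "sora",                 # assumed model name
    "prompt": "A litter of golden retriever puppies playing in the snow",
    "duration_seconds": 60,          # Sora demos show clips of up to one minute
    # Hypothetically, you could extend an existing clip instead of starting from text:
    # "input_video_url": "https://example.com/clip.mp4",
    # "direction": "backward",       # i.e., generate the minute leading up to it
}

response = requests.post(
    "https://api.openai.com/v1/videos/generations",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=600,
)
print(response.json())  # in this sketch, the response would contain a video URL
```

Again: that is an illustration of the idea, not a real integration. The point is that video creation becomes a one-prompt request, just like image generation is today.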
And it’s not like prompt-to-video is new, as we’ve had RunwayML, Pika Labs, and Stable Diffusion for a year now.
Just look at this video that Creative Director Alastair Green created with Stable Video Diffusion, based on Midjourney AI art from David Goode.
Still, the length and realism of Sora’s videos scared people – especially in the creative industry.
Their anxiety is compounded by the speed at which AI is developing, something Sora underscored and MKBHD highlighted in his comparison of 2023 Video AI vs. Now:
“Remember when we said, okay, this AI stuff is cool and all, but clearly there's a long way to go before there's any need for concern? Well, welcome to the future people!” – MKBHD
Welcome to the future, indeed. A future where jobs we thought wouldn’t be affected by AI for a long time to come soon will be.
7 Things You Don't Know About Sora
You may have seen some of the Sora videos already, but here are seven things you may have missed or haven’t thought about yet:
1. Sora Is Going to Kill A Lot of Creative Jobs
To continue on the previous point, Sora's ability to generate high-quality video content from text prompts could lead to significant shifts in the creative job market.
Traditional roles in videography, special effects, and animation may face redundancy.
Maybe creatives can adapt by developing skills in AI supervision, ethical AI usage, and creative direction that leverages AI capabilities. But will that be the case for everyone?
Because I know from my advertising days that we would easily work with 50 to 100 people on a shoot for a single 30-second TV commercial.
It’s not just the cameraman or visualizer who will be worried – it’s an entire industry around them.
In discussions among filmmakers and VFX artists on Reddit, many in the industry see the potential impact of what Sora can do now, and of what it could do if it advances further.
“This shit is leapfrogging all our workflows. I am not sure what the point of spending a decade becoming good at something in vfx is when the writing is on the wall. It is so depressing that AI is being used to automate human creativity.” – Redditor RandVR
And:
"Ouch. that really is depressing... I will be out of a job in the next 5 to 10 years." – Redditor mtojay
And OpenAI seems to know it’s going to kill a lot of jobs.
As OpenAI technical staff member Yonadav Shavit shared: “We very intentionally are not sharing it widely yet - the hope is that a mini public demo kicks a social response into gear.”
Well, that social response is now definitely underway in circles where this technology has the most impact.
Because we’re talking about a serious step up here. To create that one-minute video from a prompt, OpenAI’s Sora generates ten billion data points, as Fireship also explains here:
Being able to transform and create that amount of data in mere minutes showcases the significant leap in AI's capability to understand and simulate the physical world.
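For a rough sense of scale (my own back-of-the-envelope arithmetic, not an official OpenAI figure), a single minute of plain HD video already holds on the order of ten billion raw pixel values:

```python
# Back-of-envelope estimate (my own arithmetic, not an official OpenAI figure):
# how many raw values are in one minute of plain HD video?
frames_per_second = 30
seconds = 60
width, height = 1920, 1080   # standard HD resolution
color_channels = 3           # red, green, blue

total_values = frames_per_second * seconds * width * height * color_channels
print(f"{total_values:,}")   # 11,197,440,000 — roughly ten billion values
```

And that is just the raw pixels of one clip; the model has to produce them in a way that stays coherent from frame to frame.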
Even people outside the creative industry may start to wonder: if it can do that, what else might AI be capable of?
2. Ethical and Societal Challenges
Another major concern is how this affects ethical and societal challenges, especially in an election year.
The realistic videos generated by Sora amplify concerns about deepfakes and their potential use in spreading misinformation, manipulating public opinion, and eroding trust in media.
And while better detection tools, watermarks, and upcoming legal frameworks could lower those risks, the speed at which a high volume of content can be created and pushed to platforms like TikTok is frankly frightening.
The AI Breakdown’s Nathaniel Whittemore summarizes well why public resistance to Sora is much higher than it was when other AI tools like ChatGPT were released:
As he notes, because video is a much more important medium for most, Sora is getting outsized negative reactions.
3. We May Simply See More Creativity – Including Personalized Content
With one of the more negative implications out of the way, let’s look at a potential upside.
By lowering the technical and financial barriers to video production, Sora could enable more people to create quality content.
This democratization could lead to a surge in diverse and innovative content.
And yes, this would challenge established media houses and content creators to adapt and innovate, but that may not be a bad thing.
As Fireship explains, we may see applications like zooming out from an existing video, creating perfect loops, and changing only certain details in a video (think: “make this take place in the 1920s”).
And it’s not only about creating content for others. This may also lead to us creating personalized content.
We could take a story (and, with the right backing from IP holders, even our favorite characters) and generate it for ourselves to enjoy.
Add to that the Apple Vision Pro, and you could imagine open-ended, highly immersive worlds, in which we are the ultimate creator.
So, let’s count down to our own AI Netflix!
4. A Completely New Take on Learning & Development
Back in our workplaces, we could imagine the same world-creating capabilities as a powerful tool for learning and development.
Just as educational applications dominate the top AI tools, we could see video generation based on prompts as a powerful learning & development tool.
Sora could create roleplaying, coaching, and visualization experiences on the fly, letting us rehearse situations before they occur.
5. Sora Could Usher In AGI
Some experts think Sora is way more than a creative tool.
NVIDIA Senior Research Scientist Dr. Jim Fan said:
“If you think OpenAI Sora is a creative toy like DALLE, ... think again. Sora is a data-driven physics engine. It is a simulation of many worlds, real or fantastical. The simulator learns intricate rendering, "intuitive" physics, long-horizon reasoning, and semantic grounding, all by some denoising and gradient maths.”
He explains this view by breaking down the ‘pirate ships in a coffee cup’ video:
- The simulator instantiates two exquisite 3D assets: pirate ships with different decorations. Sora has to solve text-to-3D implicitly in its latent space.
- The 3D objects are consistently animated as they sail and avoid each other's paths.
- Fluid dynamics of the coffee, even the foams that form around the ships. Fluid simulation is an entire sub-field of computer graphics, which traditionally requires very complex algorithms and equations.
- Photorealism is like rendering with raytracing.
- The simulator considers the cup's small size compared to oceans and applies tilt-shift photography to give a "minuscule" vibe.
- The semantics of the scene do not exist in the real world, but the engine still implements the correct physical rules that we expect.
As Sora is simulating our world, rather than just generating videos, there’s an interesting argument that this could contribute to AGI.
(AGI, or artificial general intelligence, refers to AI that can perform any cognitive task a human can, at or above human level.)
6. Sora Could Also Become the Best Image Creator
Already, 13% of US Knowledge Workers use Adobe Firefly, and image generation is the 5th biggest use case for current AI tool usage globally.
Tools like DeepAI, Midjourney, Canva Image Generator, and Looka all capture major user bases.
But Sora may steal some shine here.
Because, as AI Explained… explains… Sora is not just a video generator; it’s also the best image generator.
It has to create those billions of data points anyway, so it could just as easily give you a single still frame. (A movie is just a sequence of still frames.)
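In practice, pulling a still out of any clip is trivial once the video exists. Here’s a minimal sketch using OpenCV, assuming you’ve downloaded a (hypothetically Sora-generated) clip as sora_output.mp4:

```python
# Minimal sketch: grab a single still frame from a downloaded video file.
# Assumes a clip (hypothetically Sora-generated) saved locally as "sora_output.mp4".
import cv2  # pip install opencv-python

video = cv2.VideoCapture("sora_output.mp4")
video.set(cv2.CAP_PROP_POS_FRAMES, 0)      # jump to the first frame (any index works)
success, frame = video.read()
if success:
    cv2.imwrite("still_frame.png", frame)  # save the frame as a regular image
video.release()
```

If the video model is good enough, any frame it produces is, in effect, a generated image.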
7. Technology May Start Moving Faster Than Our Ability to Adapt
The ability of Sora to combine and generate videos has been described as a massive leap forward, catching many by surprise.
The public response to Sora shows that the pace of AI advancement is outstripping our culture, our understanding, and our ability to adapt.
This is in part because the development of AI technologies, including Sora, is not linear but exponential.
Each new iteration builds upon the last, accelerating progress in ways that are increasingly difficult to predict.
This is exactly why a 3-day workweek sounds very credible.
It also highlights why it’s so important not to be left behind, and to start experimenting with AI today.
Bottom Line
In short, I understand why there’s such a backlash about Sora – it’s such a massive change in AI’s capabilities.
This OpenAI release truly challenges us to critically examine the implications of AI on society and our professional, ethical, and personal lives.
What do you think? Join the discussion by clicking here.