The gaming industry has long used AI in its products and tools. One might therefore expect the adoption of generative AI, the newest iteration, to be more straightforward than it has reportedly been in many other industries, especially given its potential to transform the development process. But our research—20 interviews across game studios from AAA (large, high-budget publishers) to indie—found a consistent pattern: organizations began by empowering individuals and automating existing, tractable tasks, but struggled when workflows became more complicated and spanned multiple teams. Progress came when individuals used AI to reach across domain boundaries, completing adjacent tasks themselves. The most dramatic results, however, came from studios designed around AI from day one, where small generalist teams replaced specialist silos and cycle times collapsed from months to weeks. The implication: don’t stop after asking “what can we automate?” Also consider “how would we build if we designed around AI from day one?”
Cite as:
Ahmed, Zimran, Beyond Copy-and-Paste: How Game Studios Are Reorganizing Around AI (April 07, 2026). Available at https://gail.wharton.upenn.edu/wp-content/uploads/2026/04/Beyond-Copy-and-Paste-How-Game-Studios-Are-Reorganizing-Around-AI.pdf
How studios actually adopt AI
We conducted 20 structured interviews (60–90 minutes each) with product managers, engineers, designers, CTOs, and executives across AAA, mid-size, and indie game studios in the US and EU — all with at least some AI adoption. We spoke directly with practitioners to understand what was happening on the ground three years after ChatGPT’s launch.
Every studio we studied began its generative AI journey the same way: provisioning a secure LLM tool like ChatGPT Enterprise and offering it to employees, usually without training or structured experiments. These early “copy-and-paste” deployments — where a user puts context into a chat window, gets a result, and manually carries it back into their work — exposed people to the technology and let individuals experiment. But they left the organization structurally unchanged.
When studios tried to move beyond individuals to automate full workflows, they hit a consistent wall: tacit knowledge — the unwritten rules and institutional memory that live in people’s heads rather than in any document. Multi-person workflows run on exactly this kind of knowledge, so before AI can automate them reliably, someone must extract and codify all of it — a genuinely difficult task, and one that some employees, understandably, were reluctant to assist with.
Progress came from a different direction. Individuals began using AI to complete tasks that previously required other teams — a product manager writing complex data queries, an engineer generating 2D art assets, a finance controller automating a reporting process that used to take three days. These bottom-up, boundary-crossing efforts went beyond copy-and-paste and meaningfully changed what individuals could accomplish.
The Four Stages of AI Adoption
1. Copy-and-paste AI. Studios provision secure LLM tools and offer them broadly. Individuals gain productivity, but each conversation starts from scratch and a human must shuttle information in and out. The organization operates as before.
2. Workflow pilots. Organizations attempt top-down process automation. Progress stalls on tacit knowledge extraction and employee resistance.
3. Read/Write AI — boundary crossing. Individuals use AI to complete tasks that previously required other teams. Knowledge compounds in shared documents over time, and failure rates drop as context accumulates. One PM reduced data query failures from 70–80% to 5% over several months.
4. AI-first studio. Small generalist teams organize around business outcomes rather than technical domains. Game design documents become the primary input to AI production pipelines. Only 3 of 20 studios we interviewed reached this stage — all were built around AI from founding.
Based on 20 structured interviews across AAA, mobile, and indie studios · Wharton Generative AI Labs · 2026
What AI-first actually looks like
The most dramatic results came from studios that had been built around AI from day one. These organizations didn’t need to overcome layers of tacit knowledge or existing org structure. Their designs, playbooks, and procedures were captured in markdown files — plain text documents easy for both humans and AI to read — from the start and used by LLMs to complete tasks. The game design document — detailing the core pillars of the game itself — was referenced when asking an AI to build a gameplay loop (the core cycle of actions a player repeats) or generate art assets.
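The documentation-driven workflow described above can be sketched in code. The snippet below is a minimal illustration, not any studio's actual pipeline: the design document text, the “Core Pillars” section name, and the `build_prompt` helper are all hypothetical. It shows the underlying idea — because the design doc is plain markdown, the relevant section can be extracted mechanically and prepended as context to a concrete AI production task.

```python
def extract_section(markdown: str, heading: str) -> str:
    """Return the text under a '## heading' until the next '## ' heading."""
    out, capturing = [], False
    for line in markdown.splitlines():
        if line.startswith("## "):
            capturing = line[3:].strip().lower() == heading.lower()
            continue
        if capturing:
            out.append(line)
    return "\n".join(out).strip()

def build_prompt(design_doc: str, task: str) -> str:
    """Prepend the game's core pillars to a production task for an LLM."""
    pillars = extract_section(design_doc, "Core Pillars")
    return f"Game design context:\n{pillars}\n\nTask: {task}"

# Hypothetical game design document in markdown.
doc = """# Design Doc
## Core Pillars
- Fast, readable combat
- One-more-run session loops
## Art Direction
Painterly, high contrast.
"""

prompt = build_prompt(doc, "Draft the core gameplay loop as pseudocode.")
```

The resulting `prompt` would then be sent to whatever LLM the studio uses; the point is that the same markdown file serves both human readers and machine consumers without a separate knowledge-extraction step.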
Small teams wore multiple hats, using AI to execute tasks across product management, data science, engineering, QA, and design. One AI-first studio ran with just five technical team members, all generalists. Another operated with roughly 22 people total. The result was ‘pipeline collapse’ — a dramatic compression in the number of steps and handoffs required to move from idea to prototype to playtest.
For game studios, iteration speed is central. Anything that speeds up the cycle from idea to design to prototype to playtest helps teams identify whether something works before investing heavily. AI-first studios could discover what made a game fun far faster than incumbents — at a fraction of the cost.
Productivity Gains
The accompanying figures show time compression — how much faster key tasks became with AI. A ‘vertical slice’ is a small but complete playable demo that represents the full game experience, traditionally one of the most time-intensive early milestones in development.
What AI cannot do
While studios found AI highly effective at generating code, graphics, queries, and documentation, they consistently reported that it could not substitute for activities centered on aligning people. Strategic planning, team alignment, and culture-building all require human-to-human collaboration — discussion, debate, and shared experience that cannot be short-circuited.
A strategy document produced by AI, without the team participating in the process, may contain the right words but fail to produce commitment. The process is what matters: it brings people together, surfaces concerns, and produces the shared understanding that makes a plan actually work. These human-driven coordination processes also generate the tacit-made-explicit documentation that AI systems then rely on to execute. In that sense, human judgment and AI execution remain deeply complementary.
Key Takeaways
Don’t stop at individuals.
Copy-and-paste AI generates anecdotal gains but leaves the organization structurally unchanged. The bigger opportunities require connecting workflows across teams.
Tacit knowledge is the real bottleneck.
Multi-person workflows run on unwritten rules. Before AI can automate reliably, that knowledge must be extracted and codified — often the hardest part of the work.
Bottom-up boundary-crossing outpaces top-down pilots.
The most durable early gains came from individuals who already understood the problem space and used AI to handle adjacent execution — without threatening existing roles or requiring org-wide coordination.
AI-first studios compress pipelines by 4–20x.
Studios built around AI from founding — with small generalist teams and documentation-driven workflows — achieved results incumbents have not yet approached.
Human coordination cannot be automated.
Setting strategy, aligning teams, and building culture require people to work through tradeoffs together. Skipping that process produces outputs without commitment.
AI has solved production, not distribution.
As content creation costs fall, finding an audience becomes the harder constraint. Existing platforms were not designed to handle AI-scale content volume.
