EPOCH is an acronym for five categories of uniquely human capabilities.
These capabilities help identify tasks that are less automatable, and more valuable, when humans work alongside AI.
“It's not just about what AI can do, but what you — as a human — bring to the table that AI can't.”
Your assignment was not just a creative writing exercise.
It was a serious method of engaging with the future of technology—used by professionals in AI, policy, and national security.
Example: AI 2027 Scenario
A speculative yet research-informed vision of what might happen if AI capabilities continue accelerating.
This is the kind of creative yet grounded thinking that combines:
Plausible Trajectory (though fast-paced)
Deep Socio-Technical Implications
Fictional — But Purposeful
Your dystopia/protopia vignettes?
You're doing what think tanks and AI labs do.
And doing it before graduating.
What is SAC (Structured Academic Controversy)?
A fast-paced, role-based debate format where you explore both sides of a real-world ethical dilemma in software engineering — then drop roles and discuss openly.
How it works:
Why it matters:
🧠 You’re not here to win — you’re here to understand.
Sigal Ben-Porath
In 2023, the Writers Guild of America (WGA) went on strike. A major issue: the rise of generative AI in screenwriting. Writers feared studios would use AI to generate scripts, reducing creative roles and exploiting past work to train models. Studios claimed AI could boost efficiency and assist writers.
After months of protest, a deal was reached: AI cannot receive writing credits, and writers cannot be forced to edit AI-generated content. Writers can choose to use AI as a tool.
This case spotlights the creative labor vs. automation debate and the question: Should tech replace or augment human creativity?
From Bojack Horseman (2015), Netflix
Pro (Writers):
Con (Studios/Tech):
Why It Matters:
Future engineers will shape AI’s role in creative industries. How do we ensure AI supports, not replaces, the human voice?
AI models require massive computational power, which means intensive cooling — often using millions of gallons of water per data center.
Every time you chat with an AI, it indirectly consumes water. Microsoft reported a 34% jump in water use largely tied to AI growth.
In drought-prone areas, like parts of Arizona or Iowa, communities are worried about competition for water with tech giants. Yet companies say they’re investing in greener cooling and water replenishment programs.
This dilemma asks: Should AI development be limited to conserve water, or should tech lead the way in finding sustainable solutions?
Pro (Tech Industry):
Con (Environmental Advocates):
Why It Matters:
Sustainable AI is not just a technical problem. It’s about ethical design, environmental justice, and local responsibility.
Do different phases of AI/ML contribute equally to resource consumption?
Does AI/ML replace less efficient ways of doing the same thing?
AI tools like ChatGPT sparked a wave of academic cheating concerns in CS/SWE education. Some students used AI to write code or essays; some professors responded harshly — even using AI to detect AI, leading to false accusations.
This raises key dilemmas:
The controversy mirrors the industry shift: professional developers use AI. Shouldn’t students learn how to do so responsibly?
Pro (Students/Activists):
Con (Educators/Institutions):
Why It Matters:
This is not just about cheating. It’s about trust, fairness, and redefining learning in the age of intelligent tools.