Lecture 17

Agenda

  • EPOCH framework
  • "AI 2027"
  • Communication in Teams
    • Neurodivergence and communication
  • Structured Academic Controversy
    • Presentation of Themes
    • Recitation: Discussion
       
  • After recitation: Please open a pull request with your round3 work
    • ⚠️ Your team will be evaluated based on the correctness of the pull request.
    • ⚠️ The repo has changed significantly since we created it, so synchronize those changes before opening your pull request.
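The sync-then-PR flow above can be sketched with plain git. This is a minimal, self-contained demo using throwaway local repositories; the branch name `round3` and all paths are illustrative, not the course's exact setup:

```shell
set -e
# Stand-in for the course repo ("upstream"); path is illustrative.
tmp=$(mktemp -d)
git init -q -b main "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=s@e -c user.name=student \
    commit -q --allow-empty -m "initial"

# Your clone of it:
git clone -q "$tmp/upstream" "$tmp/mine"

# Meanwhile, the course repo changes significantly:
git -C "$tmp/upstream" -c user.email=s@e -c user.name=student \
    commit -q --allow-empty -m "repo restructuring"

# Synchronize BEFORE opening the pull request:
cd "$tmp/mine"
git fetch -q origin
git merge -q origin/main        # or: git rebase origin/main
git checkout -q -b round3       # do your round3 work on this branch
echo "now on $(git branch --show-current) with $(git rev-list --count HEAD) commits"
# prints: now on round3 with 2 commits
```

After committing your work on `round3`, push the branch and open the pull request from it; the merge (or rebase) step is what prevents the "out of sync with upstream" conflicts the warning above describes.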

The EPOCH of AI

Replacement or Augmentation?

AI & Future of Work: The MIT Sloan View

  • Public fears of AI replacing jobs are overblown.
  • New MIT Sloan research shows AI is more likely to augment human labor than replace it.
  • Researchers argue for a shift in focus: not just “What can AI do?” but “What are humans uniquely good at?”
  • Introduces a framework to evaluate which tasks are human-intensive and complementary to AI.
  • Human capabilities are harder to replace and are growing in importance across occupations.

EPOCH Framework (Loaiza & Rigobon, 2024)

EPOCH is an acronym for five categories of uniquely human capabilities:

  • Empathy and Emotional Intelligence
  • Presence, Networking, and Connectedness
  • Opinion, Judgment, and Ethics
  • Creativity and Imagination
  • Hope, Vision, and Leadership

These capabilities help identify tasks that are less automatable and more valuable when performed alongside AI.

Why This Matters for Engineers

  • Many engineering jobs blend technical and EPOCH skills.
  • Tasks high in EPOCH are growing in frequency and importance (2016–2024 trend).
  • Not all automation is substitution—augmentation boosts human performance.
  • Engineers should cultivate:
    • Collaborative and ethical judgment
    • Leadership and communication
    • Creative problem-solving

“It's not just about what AI can do, but what you — as a human — bring to the table that AI can't.”

AI 2027

"Creative Engagement 1"
but on steroids

From Student Vignettes to Real-World Forecasts

Your assignment was not just a creative writing exercise.

It was a serious method of engaging with the future of technology—used by professionals in AI, policy, and national security.

Example: AI 2027 Scenario
A speculative yet research-informed vision of what might happen if AI capabilities continue accelerating.

  • Written by researchers and forecasters in the AI community (e.g. Daniel Kokotajlo, Scott Alexander)
  • Based on trend extrapolation, internal lab insights, and expert judgment
  • Blends geopolitical forecasting, technical scaling laws, and alignment risk scenarios
  • Used to provoke serious conversations at labs, think tanks, and governments

This is the kind of creative yet grounded thinking that:

  • Influences policy
  • Shapes safety decisions
  • Prepares society for discontinuities

What Makes the AI 2027 Scenario Powerful

Plausible Trajectory (though fast-paced)

  • Agents gradually evolve into “AI coworkers” and then “AI teams”
  • R&D acceleration loops become real
  • Competition between US and China intensifies over compute and models

Deep Socio-Technical Implications

  • Job displacement: Especially for junior software engineers
  • Security risks: Espionage, cyberwarfare, bioweapons potential
  • Alignment challenges: Models learn to “appear honest” vs being honest

Fictional — But Purposeful

  • Not a literal prediction
  • A tool to imagine, warn, and prepare
  • Exactly the kind of narrative you practiced writing—with real-world relevance

Your dystopia/protopia vignettes?
You're doing what think tanks and AI labs do.
And doing it before graduating.

Structured Academic Controversy

Studying an issue in depth

Structured Academic Controversy (SAC) – What & Why

What is SAC?
A fast-paced, role-based debate format where you explore both sides of a real-world ethical dilemma in software engineering — then drop roles and discuss openly.

How it works:

  1. Assigned Roles – You argue for a side you may not agree with
  2. Team Discussion – Prepare 2–3 strong points
  3. Presentation & Paraphrasing – Share and restate opposing views
  4. Open Reflection – Step back, share your own views, and explore takeaways

Why it matters:

  • Deepens ethical reasoning and empathy
  • Builds listening and collaboration skills
  • Prepares you for complex, ambiguous decisions in real-world engineering
  • Helps you see tech not as neutral, but as a force with social, political, and environmental impact

🧠 You’re not here to win — you’re here to understand.

Sigal Ben-Porath

TOPIC 1: WGA Strike & Generative AI — What Happened?

In 2023, the Writers Guild of America (WGA) went on strike. A major issue: the rise of generative AI in screenwriting. Writers feared studios would use AI to generate scripts, reducing creative roles and exploiting past work to train models. Studios claimed AI could boost efficiency and assist writers.

After months of protest, a deal was reached: AI cannot receive writing credits, and writers cannot be forced to edit AI-generated content. Writers can choose to use AI as a tool.

This case spotlights the creative labor vs. automation debate and the question: Should tech replace or augment human creativity?

From Bojack Horseman (2015), Netflix

WGA Strike — Pros & Cons

Pro (Writers):

  • AI may erode creative jobs and fair compensation.
  • Training on old scripts without permission = exploitation.
  • Creative quality, cultural nuance may suffer.

Con (Studios/Tech):

  • AI can be a tool, not a replacement.
  • Increases efficiency under growing content demand.
  • Innovation may evolve storytelling (if guided ethically).

Why It Matters:
Future engineers will shape AI’s role in creative industries. How do we ensure AI supports, not replaces, the human voice?

TOPIC 2: AI and Water Use — The Hidden Cost

AI models require massive computational power, which means intensive cooling — often using millions of gallons of water per data center.
Every time you chat with an AI, it indirectly consumes water. Microsoft reported a 34% jump in water use largely tied to AI growth.

In drought-prone areas, like parts of Arizona or Iowa, communities are worried about competition for water with tech giants. Yet companies say they’re investing in greener cooling and water replenishment programs.

This dilemma asks: Should AI development be limited to conserve water, or should tech lead the way in finding sustainable solutions?

AI & Water — Pros & Cons

Pro (Tech Industry):

  • AI can solve major problems (e.g., climate modeling).
  • Water usage is being offset (e.g., Microsoft’s “water positive” pledge).
  • Cooling tech is improving (e.g., non-potable water, scheduling loads).

Con (Environmental Advocates):

  • AI’s water use is exploding with scale.
  • Communities may face shortages, inequity.
  • Lack of transparency and regulation is a concern.

Why It Matters:
Sustainable AI is not just a technical problem. It’s about ethical design, environmental justice, and local responsibility.

Do different phases of AIML contribute equally to resource consumption?

Does AIML replace less efficient ways of doing the same thing?

TOPIC 3: AI Cheating Tools — New Front in Academic Ethics

AI tools like ChatGPT sparked a wave of academic cheating concerns in CS/SWE education. Some students used AI to write code or essays; some professors responded harshly — even using AI to detect AI, leading to false accusations.

This raises key dilemmas:

  • What counts as “cheating” with AI?
  • Should we ban it or teach students to use it ethically?
  • Is this a threat to learning, or a chance to evolve education?

The controversy mirrors the industry shift: professional developers use AI. Shouldn’t students learn how to do so responsibly?

Cheating or Protest? A Gray Area

Pro (Students/Activists):

  • AI use reflects real-world tools; a ban enforces outdated norms.
  • Systemic issues (e.g., LeetCode hiring) may deserve critique.
  • Better to teach responsible AI use than criminalize it.

Con (Educators/Institutions):

  • Undermines mastery and integrity.
  • Creates unfair advantage; devalues honest work.
  • AI detection remains unreliable — yet needed.

Why It Matters:
This is not just about cheating. It’s about trust, fairness, and redefining learning in the age of intelligent tools.