Beyond Algorithms: Engineering Judgment in the Age of AI

When Justin “Gus” Hurwitz walks into a classroom, he’s not there to teach rules. He’s there to teach engineers to see the fracture points between technical judgment, legal obligation, and moral responsibility.
In his Technology, Ethics, & the Legal Landscape course, offered within Penn Engineering’s MSE-AI and MCIT programs, a conversation that begins with a question about self-driving cars turns into a debate about the value of human life. Should a company disable a feature if it might cause harm, even if that harm is statistically rare? How do you weigh what’s reasonable against what’s right? Whatever it does, how will the company explain its decision to a judge, a regulator, or an angry member of Congress?
For Gus, those moments of discomfort are the point. His course is less about compliance and more about judgment – training engineers to stop, think, and recognize that every design decision has human consequences. “Engineering is never just technical,” he tells his students. “It’s about the systems we build, and the society we build around them.”
This class captures the “AI + X” philosophy that defines a Penn Engineering Master’s degree: artificial-intelligence engineers must understand not just algorithms, but accountability.
The Teaching Construct: Engineering Judgment in Real Time

Gus’s classroom runs like a design lab for decision-making. He drops students into messy, real-world scenarios – the Ford Pinto recall, Tesla’s $243 million verdict, the “Fight Club” cost-benefit scene – and forces them to test every assumption they have about risk, innovation, and responsibility.
He starts with cases that feel “obviously right.” Within minutes, students realize that every choice shifts harm from one group to another, or from safety to affordability, or from law to market. “Every choice has trade-offs,” he explains. “No matter what set of choices you make, they’re the wrong ones for someone.”
As Gus emphasizes to his students, “The key is recognizing these trade-offs so that you can explain why you made the choices that you did.” His goal: to cultivate engineering judgment – the ability to see that every line of code embeds social choices in technical ones.
Law as a Design Constraint, Not a Speed Bump
Most engineers treat law as an obstacle. Gus reframes it as a design parameter – as fundamental as cost, performance, or usability. “Everything engineers do is regulated,” he reminds them. “And almost all regulation exists because something engineered went wrong.”
He teaches law the way engineers understand systems: as the record of past failures. Negligence and product-liability cases become debugging guides. Consumer-protection standards become user-experience lessons. When students internalize that, they stop asking, “What can we get away with?” and start asking, “What should we design for?”
For students designing complex, real-world systems – whether in MSE-AI or MCIT – that perspective is essential. The course gives engineers a framework to anticipate bias, privacy risk, and unintended harm before their technologies ever reach production.
The Method: Tension as Pedagogy
Every class balances on a knife-edge between analysis and argument. One moment students are debating whether “reasonable” means economically efficient or ethically defensible; the next, they’re calculating risk using the Hand Formula while questioning the morality of assigning a dollar value to human life.
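The Hand Formula the students apply is simple enough to sketch in a few lines of code: under Judge Learned Hand’s test from United States v. Carroll Towing (1947), a party is negligent when the burden of taking a precaution (B) is less than the probability of harm (P) multiplied by the magnitude of the loss (L). The figures below are hypothetical, chosen purely to illustrate the calculation – not drawn from any case discussed in the course.

```python
def is_negligent(burden: float, probability: float, loss: float) -> bool:
    """Hand Formula: negligence when B < P * L, i.e. the precaution
    costs less than the expected harm it would prevent."""
    return burden < probability * loss

# Hypothetical numbers: a $100,000 safety fix against a 1-in-10,000
# failure that would cause $5 billion in total harm.
expected_harm = 0.0001 * 5_000_000_000  # $500,000 in expected harm
print(is_negligent(100_000, 0.0001, 5_000_000_000))  # True: the fix is cheaper than the expected harm
```

The arithmetic is trivial; the discomfort Gus wants students to feel comes from what L represents – the formula only works once you have assigned a dollar value to human injury.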
By the end of a session, they’ve practiced what real engineering leaders do: quantify risk without losing sight of people; argue both sides of a decision; and recognize when cost-benefit crosses into moral failure. The exercise isn’t about memorizing doctrine – it’s about building muscle memory for ethical engineering leadership.
The Man Behind the Method
Before he was a law professor, Gus was an engineer – part of a Los Alamos team that briefly held a world record for data-transfer speeds. Early in his academic career, he taught engineering to law students, helping future lawyers understand the technologies shaping their cases. Now, he’s reversed the equation: teaching law to engineers. That shift defines his mission – to make legal reasoning and ethical awareness as fundamental to design as code itself.
His passion for bridging those worlds is evident in the way he teaches: inviting argument, not avoiding it; turning law into a tool for innovation, not a limit on it. The class filled within days of launch, and he doubled its capacity to meet demand.
“Engineering isn’t just about building technology – it’s about understanding how that technology impacts society,” says Boon Thau Loo, Senior Associate Dean for Education and Global Initiatives. “Gus’s course brings that to life. It’s the first collaboration between our law and engineering schools in the online space, and it sold out within days. That tells you how deeply students crave this kind of learning.”
The Institutional Bridge: From Classroom to Culture
Christopher Yoo, the Imasogie Professor in Law and Technology; Professor of Communication; Professor of Computer and Information Science; and founding director of Penn’s Center for Technology, Innovation & Competition (CTIC), describes Gus as “one of my greatest success stories.”
Yoo has spent years building what he calls a “bilingual generation” – people fluent in both code and consequence. “Engineers used to clear out a room when policy showed up,” he says. “Now they know they can’t build the future if they ignore it.” Gus, he adds, “embodies that shift.”
Gus’s role also connects directly to Penn’s participation in the Public Interest Technology University Network (PIT-UN), founded by the MacArthur and Ford Foundations to bring more technically literate voices into policymaking. Through that work, Gus helps engineers see law not as a constraint, but as a partner in building a more accountable tech future. For MSE-AI and MCIT students, it’s proof that responsible AI isn’t just an elective topic – it’s the new professional baseline.
Why It Matters
What Gus is building isn’t just a course – it’s a new kind of literacy. When students practice trade-offs before they’re executives, when they learn how negligence and design failure rhyme, when they recognize law as a form of collective memory, they stop treating ethics as an afterthought.
“We’ve always taught our students to solve hard problems. Gus teaches engineers to ask harder questions,” says Chris Callison-Burch, Professor and MSE-AI Program Director at Penn Engineering. “And that’s how you build technology the world can trust.”
It’s the simplest possible summary of Gus’s impact – and the most urgent. In his classroom, curiosity isn’t just encouraged; it’s required. Because in the end, building trust may be the most technical work of all.
About the Instructor

Justin “Gus” Hurwitz
Senior Fellow and Academic Director, Center for Technology, Innovation & Competition (CTIC)
University of Pennsylvania Carey Law School
Before joining Penn, Gus worked as a computer scientist focused on high-performance networking and later at the Department of Justice’s Antitrust Division. His research spans telecommunications, technology law, and regulation – with a focus on how legal frameworks shape innovation. He teaches Technology, Ethics, & the Legal Landscape for Penn Engineering Online as part of the MSE-AI and MCIT programs.