FEATURE
Model Minds
Menlo Alumni Offer Perspectives on Their Work in the AI Industry and the Future of Learning in a World of AI
“I think the single most valuable thing I learned at Menlo was the idea of claiming your education,” says Alexa Thomases ’19, a software engineer developing AI healthcare solutions at Microsoft.
That mindset, she recalls, was instilled by her Menlo English teacher Ms. Ramsey, who encouraged students to be active participants in their own learning. Today, as the meteoric rise of AI changes the landscape of knowledge acquisition and ownership, Alexa draws an analogy that Ms. Ramsey would likely appreciate—between teaching AI and teaching English. “Just as we learn to recognize the rhetorical devices authors use to persuade, and how literary and stylistic techniques reveal layers of meaning beyond the words on the page,” she says, “we also need to understand, even at a basic level, not just the inputs and outputs of AI as a user but the mechanisms that drive it.”
In a world where technological progress outpaces tradition, Menlo alumni like Alexa remind us that the future of education lies not in resisting change, but in shaping it—with curiosity, critical thinking, and the courage to lead.

ALEXA THOMASES
CLASS OF ’19
EMBRACE CHANGE

KATE PARK
CLASS OF ’14
“What inspires me about AI is ultimately what it can do for people,” says Kate Park ’14, Director of Product at Scale AI and former Lead AI Product Manager on Tesla’s Autopilot computer vision team.
Her advice to the Menlo community? “Embrace change.” She recommends that both students and teachers use large language models (LLMs) to “enhance themselves,” suggesting that schools incorporate this technology into early education, just as they once introduced typing on a keyboard. “This is very necessary,” she explains, “because it is the future.”
For Kate, the concept of actively owning her education remains just as relevant now as it was at Menlo. On a recent trip to Korea, she enrolled in a two-week immersive Korean language course. Mornings were spent in traditional classroom instruction, and then she turned to ChatGPT each evening to dig deeper into her homework assignments. “I think that makes me a better student,” Kate says of how the technology enriched the in-person experience.
This kind of learning points to a larger shift: students now have the tools to expedite their own education. “Teachers need to be ready for students to advance a bit faster, especially motivated students,” Kate observes. This reaffirms the vital role of a curriculum like Menlo’s, grounded in connection, critical reasoning, and character building.
As the founding engineer at Lighten AI, a company building LLM technology designed to accelerate clinical evidence generation, Niky Arora ’17 believes that “AI genuinely has the ability to improve quality of life and patient outcomes” in the healthcare space.
And she agrees that AI’s permanence in our lives demands early and thoughtful integration. “It’s going to be a tool that exists for the rest of everybody’s lives, so I think it’s important to introduce it early,” she says.

NIKY ARORA
CLASS OF ’17
PROCEED WITH CARE

MATT JOSS
CLASS OF ’16
Matt Joss ’16, a former software engineer who developed a platform for K-12 education content development, emphasizes an ethics-based approach. “The market will easily teach you how to use AI, how to build AI, but it’s not going to teach you why AI is bad, why AI can become addictive, or how these companies are gathering data on you in order to influence your life,” he says.
Matt proposes that schools introduce a “digital humanities” curriculum that examines the ethical, philosophical, and societal implications of emerging technology. “You need to be able to teach things that students are not going to be able to learn outside of the classroom,” he advises.
This echoes Alexa’s belief that “a core component of education going forward needs to be equipping students and future leaders to understand not just how AI works but also why it does what it does.” Developing this framework will help students recognize how AI systems can be manipulated or misused, enabling them to critically assess associated risks and limitations.
Alexa admits to having extreme qualms about the possible pitfalls of AI but feels as though both things can be true: “AI can have the power for incredible positive social impact and the potential for devastating risks to all of humanity.” This dual reality reinforces the need for thoughtful integration of AI into society, beginning with how we prepare the next generation to engage with it.
“What inspires me about AI is ultimately what it can do for people.”
—Kate Park ’14, Director of Product, Scale AI
PROGRESS WITH PURPOSE

MEGHA MALPANI
CLASS OF ’15
Megha Malpani ’15, a former Product Manager at Google Gemini, is inspired by the endless possibilities of this new technology. Passionate about the intersection of AI, climate, and social impact, she is currently pursuing an MBA at Stanford while building a stealth startup that leverages AI to minimize supply chain waste and excess emissions. Echoing Alexa, Megha adds, “It’s your responsibility to take ownership of your learning.”
It was this sense of agency, drive to engage with complex problems, and commitment to a greater purpose that led Thomas Woodside ’19 to shift his career focus in 2022. As Co-Founder and Senior Policy Advisor at the Secure AI Project, Thomas combines technical expertise with a policy-oriented approach, working across disciplines to shape the responsible regulation of AI systems. Speaking to Menlo computer science students via Zoom this past spring, Thomas advised, “The best thing you can do is try to learn how to think about things in different ways and how to talk to people who disagree with you.”

THOMAS WOODSIDE
CLASS OF ’19
SCHOOL OF THOUGHT
Meanwhile, Matt underscores the significance of schools in this pursuit. “I think we need teachers and classrooms that try to bring out student voices that are unique, that don’t sound like they’ve been auto-generated,” he says. Much more than a place where you memorize and regurgitate facts, “you come to school to socialize, to learn how to work in teams, and to be introduced to new ideas outside your purview.” As AI continues to gain power and prevalence, Menlo alumni remind us that the essence of education isn’t just mastering new tools, but developing the curiosity to engage with them, the courage to question them, and the moral compass to guide them along the way. While the future of AI may seem limitless, Thomas is “hopeful people are going to be involved in deciding what our values are and what we actually want for the world.”
“AI can have the power for incredible positive social impact and the potential for devastating risks to all of humanity.”
—Alexa Thomases ’19, Applied Scientist, Microsoft

Jane Zafran ’18 is a freelance writer and editor; she is Chief of Staff and Head Copy Editor of Stanford’s The Pegasus Review and Co-Founder of the Brown Journal of Medical Humanities.
© 2025 Menlo School. All rights reserved. Design by CDA.