Vantage: Google’s New AI Experiment That Scores Your ‘Future-Ready’ Skills

I’ve been watching the whole “future-ready skills” conversation for a while now, and honestly, most of it feels like corporate buzzword bingo. But Google Research just dropped something that actually got my attention.

They’re calling it Vantage. It’s a research experiment—now available on Google Labs for sign-up—that uses generative AI to assess things like critical thinking, collaboration, and creative thinking. And the kicker? They ran a study with New York University and found the AI’s scoring was on par with human experts.

That’s a stronger result than I expected. Let’s dig into what this actually is.

The problem with measuring soft skills

Here’s the thing: we’ve all been told that skills like critical thinking and teamwork are essential. The OECD Learning Compass 2030 and the WEF’s Future of Jobs report both hammer this home. But measuring them? That’s a nightmare.

Multiple-choice tests don’t cut it. They’re too rigid, too artificial. You can’t capture how someone handles a disagreement or builds on someone else’s idea with a bubble sheet. Real human role-play works, but it’s expensive, hard to standardize, and nearly impossible to scale. How do you fairly grade 500 students on conflict resolution when some groups never even argue?

Vantage tries to solve this by putting learners into simulated conversations with AI avatars. The idea is to create a controlled but realistic environment where students can show their stuff.

How Vantage works

The setup is pretty clever. You’re placed in a multi-party conversation with AI avatars that are working together on a task. Maybe you’re preparing for a debate, or pitching a creative idea. The scenarios are open-ended, which is key—no scripted paths.

Behind the scenes, there’s an “Executive LLM” that uses a rubric to steer the conversation. It dynamically introduces challenges—like having an avatar push back on your idea or introducing a conflict—to give you opportunities to demonstrate your skills. Think of it as an adaptive test engine that doesn’t just ask questions but shapes the whole dialogue to assess you.
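
Google hasn’t published how this works under the hood, so here’s a purely hypothetical Python sketch of what an orchestration loop like that might look like. Everything in it (the RUBRIC list, the call_llm() placeholder, the challenge logic) is my own guess at the shape of the thing, not Vantage’s actual code:

```python
# Hypothetical sketch of an "Executive LLM" orchestration loop.
# None of this is Google's code: call_llm() stands in for any
# chat-completion API, and the rubric dimensions are assumed from
# the skills the blog post mentions.

RUBRIC = ["critical_thinking", "collaboration", "creative_thinking"]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. any chat-completion endpoint)."""
    raise NotImplementedError

def pick_challenge(transcript: list[str], covered: set[str]) -> str | None:
    """Ask the executive model for an avatar turn that creates an
    opening for whichever rubric dimensions still lack evidence."""
    missing = [d for d in RUBRIC if d not in covered]
    if not missing:
        return None
    history = "\n".join(transcript)
    prompt = (
        "You are directing a group-work simulation.\n"
        f"Transcript so far:\n{history}\n"
        f"The learner has not yet shown evidence of: {', '.join(missing)}.\n"
        "Write one avatar turn that elicits those skills, e.g. push back "
        "on the learner's idea or introduce a conflict."
    )
    return call_llm(prompt)

def run_session(get_learner_turn, max_turns: int = 12) -> list[str]:
    """Alternate learner and avatar turns until every rubric dimension
    has been exercised (or we hit the turn cap)."""
    transcript: list[str] = []
    covered: set[str] = set()
    for _ in range(max_turns):
        transcript.append(f"LEARNER: {get_learner_turn()}")
        # Tag which rubric dimensions the latest turn gave evidence for.
        tags = call_llm(
            f"Which of {RUBRIC} does this turn demonstrate? "
            f"Answer with a comma-separated list.\n{transcript[-1]}"
        )
        covered.update(d for d in RUBRIC if d in tags)
        if len(covered) == len(RUBRIC):
            break
        challenge = pick_challenge(transcript, covered)
        if challenge:
            transcript.append(f"AVATAR: {challenge}")
    return transcript
```

The real system is presumably far more sophisticated, but the core idea (a director model that both watches and steers the conversation) is what separates this from a scripted chatbot.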

By the end, the system has gathered enough data to score you on things like critical thinking, collaboration, and creative thinking. And according to the study, those scores match what human experts would give.
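
The scoring step, again purely as a guess building on the sketch above (the 1-to-5 scale and the prompt wording are my assumptions, not Vantage’s actual rubric):

```python
# Hypothetical follow-up to the sketch above: score the finished
# transcript against each rubric dimension. The 1-to-5 scale and the
# prompt wording are my assumptions, not Vantage's actual rubric.

def score_transcript(transcript: list[str]) -> dict[str, int]:
    scores: dict[str, int] = {}
    history = "\n".join(transcript)
    for dimension in RUBRIC:
        prompt = (
            f"Rate the LEARNER's {dimension.replace('_', ' ')} in this "
            f"conversation on a 1-5 scale. Reply with a single digit.\n{history}"
        )
        scores[dimension] = int(call_llm(prompt).strip())
    return scores
```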

Is this actually useful?

I’m cautiously optimistic. The approach has been tried before—simulated environments for soft skills assessment aren’t new—but the use of generative AI to adapt in real time is a genuine step forward. The fact that it’s built with pedagogy experts from NYU gives it some credibility, too.

That said, I have questions. How well does it handle cultural differences in communication styles? Can it really distinguish between a student who’s genuinely struggling and one who’s just nervous about talking to an AI? And let’s be real—no simulation fully replaces real human interaction.

But as a scalable tool for practice and formative assessment? This could be a game-changer for high school and college settings where teachers are already stretched thin. It’s not a replacement for human judgment, but it might be a damn good supplement.

Vantage is currently English-only and open for sign-up on Google Labs. I’m curious to see how it evolves. If you try it, let me know what you think.

Full disclosure: I have no affiliation with Google or NYU. I just think this is worth paying attention to.
