An Insider's Guide to WAICY (Part 3): Strategies to Stand Out on the Global Stage
- jophy2467
- Oct 29, 2025
- 10 min read
You've chosen your track. You have an idea. Now comes the hard part: actually building something good enough to stand out among competitors.
Making WAICY finals isn't about having the flashiest idea or the most complex technology. It's about execution, storytelling, and demonstrating genuine understanding of AI's impact.
I learned this the hard way. My first project idea for WAICY was overly ambitious and technically impossible given my timeline. I pivoted to NutriGuide, which was a more focused, achievable project that played to my strengths. That decision made the difference between being one of thousands of submissions and becoming one of fewer than 40 finalists.
This part is about strategy: what separates finalist-worthy projects from the pile, how to develop your project efficiently, and how to present in a way that makes judges pay attention.


Summary
This article provides tactical strategies for standing out in WAICY, drawn from my experience as an AI Showcase finalist and AI-Generated Art 4th place winner. I break down the project development process: how to pick a problem that's neither too broad nor too narrow, the "triple constraint" of scope/quality/time, and why personal connection to your problem matters. I cover execution strategies, including the MVP (minimum viable product) approach, the 80/20 rule for feature prioritization, and specific debugging/testing tactics. I explain what judges actually look for beyond technical complexity and provide presentation strategies for the pitch, demo execution, and Q&A handling. Finally, I share the specific mistakes that sink otherwise strong projects and how to avoid them.
The Problem Selection Framework
Your project lives or dies on the problem you choose to solve. Pick wrong, and you'll struggle. Pick right, and everything else becomes easier.
The Goldilocks Principle: Not Too Broad, Not Too Narrow
Too broad:
"Solving climate change with AI"
"Improving education globally"
"Making healthcare more accessible"
These are noble goals, but they're impossible to tackle meaningfully in a 2-3 week competition project.
Too narrow:
"Detecting whether my specific houseplant needs water"
"Organizing my personal music library"
"Predicting what my dog wants to eat"
These are achievable but lack the impact judges want to see.
Just right:
"Using AI to detect coral disease in reef conservation efforts"
"Helping visually impaired students navigate school buildings with object detection"
"Reducing food waste by predicting ingredient spoilage in home kitchens"
Notice the pattern: Specific problem + clear stakeholder + achievable scope + real impact.
My Problem Selection: NutriGuide
The problem I chose: People lack nutritional knowledge about their meals, and unsustainable food consumption harms the environment.
Why this worked:
Specific enough - Focused on meal-level nutrition and sustainability, not "fixing all of healthcare"
Personal connection - I genuinely cared about nutrition and environmental impact
Stakeholder clarity - Everyday people trying to eat healthier
Achievable scope - Could build an MVP in 2-3 weeks with YOLO and OpenAI API
Demonstrable impact - Could show concrete examples of the app helping users
Why personal connection matters:
Judges can tell when you genuinely care. When I presented NutriGuide, I spoke passionately about food sustainability and nutritional education because those issues mattered to me. That authenticity comes through in your presentation and Q&A responses.
Don't pick a problem just because it sounds impressive. Pick something you'd want to use yourself.
The Three-Question Test
Before committing to a problem, ask:
1. Can I build a working demo in my available time?
If you have 2 weeks and your project requires training a model from scratch on a massive dataset, the answer is no. Be realistic.
2. Can I explain why this problem matters in 30 seconds?
If you can't articulate the problem's importance quickly, judges won't get it either.
3. Does solving this problem require AI, or could you solve it another way?
WAICY is about AI. If your solution could work just as well without AI, you're in the wrong competition.
The Triple Constraint: Scope, Quality, Time
You can't have everything. Understand the tradeoffs:
High scope + high quality = lots of time
High scope + short time = low quality
High quality + short time = small scope
Most successful WAICY projects choose high quality + small scope because time is fixed (you have a submission deadline).
My Approach: The MVP Strategy
I didn't try to build the perfect nutrition app. I built the minimum viable product that demonstrated my core idea:
Must-have features (what I built):
Food ingredient detection via camera (YOLO)
Nutritional feedback from AI (OpenAI API)
Meal logging with NutriBot insights
Recipe generation based on detected ingredients
Local market finder for sustainable shopping
Nice-to-have features (what I cut):
Social sharing of meals
Nutrition tracking over time with graphs
Integration with fitness trackers
Meal planning calendar
Community recipe database
Result: A fully functional demo that showcased the AI capabilities without getting bogged down in feature bloat.
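If it helps to see the shape of that core, here's a rough sketch of a detect-then-advise pipeline like the one NutriGuide is built around. This is not my actual code: the ultralytics package, the yolov8n.pt weights, the gpt-4o-mini model name, and meal.jpg are placeholders for whichever stack and files you use, and you'd need your own OpenAI API key.

```python
# Sketch: the "must-have" core of a NutriGuide-style app in one file.
# Assumes `pip install ultralytics openai`, an OPENAI_API_KEY in the environment,
# and a photo saved as meal.jpg. Model names and paths are placeholders.
from ultralytics import YOLO
from openai import OpenAI

def detect_ingredients(image_path: str) -> list[str]:
    """Run a pre-trained YOLO model and return the detected class names."""
    model = YOLO("yolov8n.pt")                     # pre-trained weights, no custom training
    result = model(image_path)[0]
    return sorted({model.names[int(box.cls)] for box in result.boxes})

def nutrition_feedback(ingredients: list[str]) -> str:
    """Ask an LLM for brief nutrition and sustainability feedback on the ingredients."""
    client = OpenAI()
    prompt = ("Give short nutritional feedback and one sustainability tip for a meal "
              f"containing: {', '.join(ingredients)}.")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    found = detect_ingredients("meal.jpg")
    print("Detected:", found)
    print(nutrition_feedback(found))
```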
The 80/20 Rule
80% of your project's impact comes from 20% of features.
Identify that critical 20% and make it perfect. The rest is secondary.
For NutriGuide, the critical 20% was:
YOLO actually detecting food accurately
OpenAI API providing genuinely useful nutritional insights
The demo working smoothly without crashing
Everything else was nice but not essential to proving the concept.
Execution Strategies by Track
Different tracks require different execution approaches.
AI Showcase: Technical Execution Tips
1. Start with the hardest part first
Don't build the UI before you've proven the AI actually works. I started by getting YOLO to detect food ingredients accurately before building any of the app interface.
Why: If the core AI doesn't work, nothing else matters.
2. Use pre-trained models whenever possible
I used a pre-trained YOLO model rather than training one from scratch. This saved me literally weeks.
When to train your own: Only if pre-trained models don't exist for your specific use case.
3. Test incrementally
Don't wait until everything is built to test. Test each component as you build:
YOLO detection: Test with 20+ food images
API integration: Test with various prompts
Database: Test with edge cases
Full pipeline: Test end-to-end multiple times
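For example, the detection component can be checked against a small folder of labeled photos long before any interface exists. This is only a sketch: the test_images/ folder, its file-naming convention, and the detect_ingredients helper reused from the pipeline sketch above are all hypothetical.

```python
# Sketch: smoke-test the detection component on a folder of labeled photos.
# Assumes test_images/ contains files named like "tomato_01.jpg".
from pathlib import Path

from nutriguide_sketch import detect_ingredients  # hypothetical module from the earlier sketch

def run_detection_checks(folder: str = "test_images") -> None:
    hits = total = 0
    for image in sorted(Path(folder).glob("*.jpg")):
        expected = image.stem.split("_")[0]        # e.g. "tomato" from "tomato_01.jpg"
        detected = detect_ingredients(str(image))
        passed = expected in detected
        hits += passed
        total += 1
        print(f"{image.name}: expected {expected!r}, got {detected} -> {'PASS' if passed else 'FAIL'}")
    print(f"{hits}/{total} images passed" if total else "No test images found")

run_detection_checks()
```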
4. Plan for demo failure
Have a backup demo video. I recorded a flawless run-through of NutriGuide before finals in case my live demo crashed.
Why: Technical demos fail. Always have a backup.
5. Document everything as you go
Don't wait until the end to write documentation. I kept a development log noting:
What I built each day
Problems I encountered and how I solved them
Design decisions and why I made them
This made writing the final submission documentation trivial.
AI-Generated Art: Creative Execution Tips
1. Concept before prompts
Don't just start typing prompts randomly. Develop your artistic concept first.
For "Joys of Family," I spent time thinking about what family meant to me before touching AI tools. This gave direction to my prompt engineering.
2. Iterate in batches
Generate 10-20 variations at once, identify the best 2-3, then refine those further.
Why: AI art requires lots of iteration. Batch processing is more efficient than one-by-one.
3. Study prompt engineering systematically
Learn what makes prompts effective:
Style descriptors (impressionist, photorealistic, surreal)
Composition terms (rule of thirds, centered, dynamic angle)
Lighting (golden hour, dramatic shadows, soft diffused)
Mood/emotion keywords
4. Keep a prompt journal
Document what worked and what didn't. I wish I'd done this more systematically, as I lost track of which prompts generated my best outputs.
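Here's one rough way to put both tips into practice: assemble prompts from those four kinds of descriptors and append every attempt to a simple journal file. Everything in this sketch (the component names, the CSV path, the rating scale) is illustrative, not a feature of any particular AI art tool.

```python
# Sketch: build prompts from reusable components and log each attempt to a CSV journal.
# The descriptor values and prompt_journal.csv path are illustrative only.
import csv
from datetime import date

def build_prompt(subject, style, composition, lighting, mood) -> str:
    return f"{subject}, {style}, {composition}, {lighting}, {mood}"

def log_prompt(prompt: str, tool: str, rating: int, notes: str,
               journal: str = "prompt_journal.csv") -> None:
    with open(journal, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), tool, prompt, rating, notes])

prompt = build_prompt(
    subject="a family cooking dinner together",
    style="warm impressionist painting",
    composition="rule of thirds, kitchen in soft focus",
    lighting="golden hour through a window",
    mood="joyful, nostalgic",
)
log_prompt(prompt, tool="your image generator", rating=4,
           notes="good color, hands still distorted")
print(prompt)
```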
AI LLM: Domain Application Tips
1. Pick a domain you actually know
Don't try to build a medical diagnosis LLM if you don't understand medicine. Pick something you have expertise in.
2. Test extensively with real users
Get actual people in your target domain to try your LLM application and give feedback.
3. Document failure cases
Show judges you understand limitations. What questions does your LLM struggle with? Why? This demonstrates critical thinking.
4. Refine prompts based on user feedback
LLM projects are all about iteration. Initial prompts are rarely optimal.
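A lightweight way to do both (capture failure cases and feed tester feedback into prompt revisions) is to log every test interaction. The sketch below assumes the openai client, a placeholder model name, and field/file names I made up; swap in whatever chat API and domain your project actually uses.

```python
# Sketch: log tester interactions so weak answers can drive prompt revisions.
# The model name, field names, and feedback_log.jsonl are illustrative only.
import json
from openai import OpenAI

SYSTEM_PROMPT = "You are a tutor for <your domain>. Answer simply and say when you are unsure."
client = OpenAI()

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def log_feedback(question: str, answer: str, rating: int, comment: str,
                 path: str = "feedback_log.jsonl") -> None:
    entry = {"question": question, "answer": answer, "rating": rating, "comment": comment}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# After a testing session, filter entries with rating <= 2: those are the failure
# cases to document and the spots where the system prompt needs another revision.
```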
AI Video: Production Tips
1. Script before shooting/generating
Know exactly what your video will say and show before creating anything.
2. Use AI tools strategically, not everywhere
Don't generate video with AI just for the sake of using AI. Use AI where it adds value (voice generation, effects, editing), but keep human creativity central.
3. Keep it short and punchy
2 minutes max. Judges watch dozens of videos. Respect their time.
What Judges Actually Look For
Beyond technical execution, judges evaluate projects on four key dimensions:
1. Originality
Not original: "I built a chatbot that answers questions about history."
Original: "I built an LLM that helps dyslexic students by automatically simplifying complex historical texts while maintaining factual accuracy."
The difference: specificity and novel application.
How to be original:
Don't copy past winners
Combine AI approaches in new ways
Apply AI to underexplored domains
Solve a problem you haven't seen solved before
2. Social Relevance
Weak relevance: "This could maybe help some people."
Strong relevance: "This addresses a documented problem affecting X people, supported by research showing Y impact."
I cited statistics in my NutriGuide presentation:
117 million adults have preventable chronic illnesses related to eating patterns
The food industry accounts for 26% of greenhouse gas emissions
50% of habitable land is used for agriculture
Why this matters: Shows you understand the real-world context of your problem.
3. Ethical Thinking
Judges want to see that you've thought about:
Bias: Could your AI produce unfair outcomes?
Privacy: How do you handle user data?
Misuse: Could your technology be used harmfully?
Accessibility: Who might be excluded by your solution?
What I did for NutriGuide:
Addressed potential bias in food detection across different cuisines
Explained data privacy measures (local storage, user consent)
Acknowledged limitations (app requires smartphone, internet access)
Pro tip: Don't just list ethical concerns. Explain how you've addressed them in your design.
4. Technical Sophistication
This isn't about using the most complex AI possible. It's about:
Appropriate use of AI: Is your solution actually using AI well?
Integration quality: Do your components work together smoothly?
Understanding: Can you explain how your AI works?
I didn't build the most technically complex YOLO implementation, but I could clearly explain:
Why I chose YOLO over other object detection models
How YOLO processes images
What the confidence thresholds meant
Where the model struggled and why
Understanding > Complexity
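To show the kind of threshold logic I mean, here's a rough sketch of splitting detections into "confident" and "needs manual verification" buckets. The 0.8 cutoff echoes the 80% figure I mention in the Q&A section below; the function and variable names are mine, not my exact NutriGuide code.

```python
# Sketch: keep confident YOLO detections, flag low-confidence ones for user review.
# The 0.8 threshold and all names are illustrative, not my exact NutriGuide code.
CONF_THRESHOLD = 0.8

def split_detections(result, class_names, threshold=CONF_THRESHOLD):
    """Split one ultralytics YOLO result into (confident, needs_review) lists."""
    confident, needs_review = [], []
    for box in result.boxes:
        label = class_names[int(box.cls)]
        score = float(box.conf)
        (confident if score >= threshold else needs_review).append((label, score))
    return confident, needs_review

# In an app flow, anything in needs_review would be shown to the user with an
# "is this ingredient right?" prompt instead of being accepted automatically.
```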
Presentation Strategy: The 3-Minute Pitch
If you make finals, you present to judges. Here's how to nail it.
Structure Your Presentation
0:00-0:30 - The Hook
Start with the problem + a compelling statistic or story.
"Did you know that poor diet causes almost 50% of deaths from heart disease, diabetes, and strokes? Yet most people have no idea what they're actually eating."
0:30-1:30 - The Solution
Explain what you built and how it works. Focus on the AI.
"NutriGuide uses a pre-trained YOLO model for real-time food detection and OpenAI's API for personalized nutritional guidance..."
1:30-2:30 - The Demo
Show it working. This is crucial.
Live demo > pre-recorded demo > screenshots (in order of preference)
2:30-3:00 - The Impact
Explain why this matters and what comes next.
"NutriGuide empowers users to make informed decisions about nutrition and sustainability. Future plans include..."
Demo Execution Tips
Practice your demo 20+ times
I practiced my NutriGuide demo until I could do it in my sleep. This paid off when my internet connection wavered during finals: I knew exactly what to click without panicking.
Narrate as you demo
Don't just click around silently. Explain what you're doing: "Now I'm taking a photo of these ingredients... YOLO detects tomatoes, onions, and garlic... and NutriBot generates a recipe based on my dietary restrictions..."
Have a backup plan
If your demo crashes:
Stay calm
Switch to your pre-recorded backup video
Acknowledge the issue briefly: "Let me show you the pre-recorded demo while I troubleshoot..."
Judges understand that tech fails. They care about how you handle it.
Q&A Strategy
This is where finalists often stumble. Judges ask tough questions.
Types of questions to expect:
1. Technical deep-dives
"Why did you choose YOLO over Faster R-CNN?"
"How did you handle class imbalance in your training data?"
"What's your model's accuracy on out-of-distribution samples?"
How to prepare: Know your technical choices inside and out. If you used a pre-trained model, understand why that model was trained the way it was.
2. Ethical considerations
"How do you prevent bias in nutritional recommendations across different cultures?"
"What happens if your model misidentifies an allergen?"
"Who has access to the user data?"
How to prepare: Think through potential harms before finals. Have concrete answers about how you've addressed them.
3. Future development
"How would you scale this?"
"What would you improve with more time?"
"How would you validate this actually changes user behavior?"
How to prepare: Have a roadmap ready. Show you've thought beyond the competition.
My Q&A experience:
I got asked: "How do you ensure YOLO doesn't misidentify ingredients, especially across different cuisines?"
My answer: "Great question. The pre-trained YOLO model I used was trained on diverse food datasets, but you're right that there's potential for bias. In testing, I found it struggled with some South Asian ingredients. To address this, I implemented confidence thresholds, so if YOLO is less than 80% confident, it prompts the user to manually verify the ingredient. Future versions would involve fine-tuning on more diverse food datasets."
Why this worked:
Acknowledged the limitation honestly
Explained my current mitigation strategy
Showed understanding of how to improve
Don't do this:
Pretend you don't understand the question
Get defensive
Make up technical details you're not sure about
Do this:
Be honest about what you know and don't know
Explain your reasoning clearly
Show you've thought critically about your work
Common Mistakes That Sink Projects
Mistake 1: Overengineering
The trap: "I'll add 15 features to impress judges."
The reality: Judges prefer 3 features that work perfectly over 15 that barely function.
Fix: Focus on core functionality. Make it bulletproof.
Mistake 2: Ignoring the "Why"
The trap: Building something technically impressive without explaining why it matters.
The reality: Judges evaluate impact, not just technical skill.
Fix: Lead with the problem, not the solution. Make them care about why your project exists.
Mistake 3: Poor Documentation
The trap: "The code/project speaks for itself."
The reality: Judges can't give you points for work they can't see or understand.
Fix: Document everything. Explain your process, decisions, and results clearly.
Mistake 4: Weak Demo
The trap: Assuming the submission materials are enough.
The reality: For finalists, the demo makes or breaks your presentation.
Fix: Practice your demo obsessively. Make it smooth, clear, and engaging.
Mistake 5: Ignoring Ethics
The trap: "My project doesn't raise ethical issues."
The reality: Every AI project has ethical considerations.
Fix: Proactively address bias, privacy, safety, and accessibility in your submission and presentation.
Mistake 6: Copying Past Winners
The trap: "I'll build something similar to last year's gold winner."
The reality: Judges have seen those ideas. They want novelty.
Fix: Use past winners as inspiration for quality, not as templates to copy.
The Finalist Mindset
Making finals isn't about being the smartest person in the room. It's about:
1. Execution over ideas
Ideas are cheap. Implementation is hard. Judges reward projects that actually work.
2. Clarity over complexity
A simple, well-explained project beats a complex, poorly explained one.
3. Impact over impressiveness
Judges care more about real-world relevance than technical flexing.
4. Authenticity over polish
They'd rather see genuine passion for a problem than a perfectly polished project you don't care about.
What's Next
In Part 4, I'll share the lessons I learned from competing: what this experience taught me beyond just AI, honest reflections on whether WAICY was worth the effort, advice I'd give my past self, and guidance for students considering WAICY in future years. This is the big-picture takeaway and what actually matters when the competition is over.

About the Author: I'm Jophy Lin, a high school senior and researcher. I blog about a variety of topics, such as STEM research, competitions, shows, and my experiences in the scientific community. If you’re interested in research tips, competition insights, drama reviews, personal reflections on STEM opportunities, and other related topics, subscribe to my newsletter to stay updated!