
Inside the Algorithm: How We Score 2,305 Niches Across 11 Platforms in Real-Time
Published February 18, 2026 — MicroNicheBrowser Research
Every week, someone asks us some version of the same question: "How do you actually know if a niche is good?"
It is a fair question. The internet is full of niche idea lists built on vibes, affiliate motives, or outdated research. Someone writes "27 Best SaaS Niches for 2024," gets 40,000 views, never updates it, and by the time you read it, three of those niches are saturated and two never had real demand to begin with.
We built MicroNicheBrowser because we were tired of that. We wanted signal, not noise. We wanted a system that could look at a niche — any niche — and give an honest, data-grounded answer about whether it was worth pursuing. Not "this sounds cool" but "here is what 11 different data sources tell us, here is how they combine into a score, and here is why this niche scores 71 while that one scores 34."
This article is the full transparency report on that system. We are going to show you exactly how it works: the 11 platforms we monitor, the 5 dimensions we score against, the mathematics of the scoring engine, and what the distribution of 2,305 scored niches actually looks like. By the end, you will understand why we set the validation threshold at 65, why only 141 niches (6.1%) have cleared it, and what separates a niche that scores 78 from one that scores 52.
No hand-waving. No black boxes. This is the full picture.
Why Methodology Matters More Than the List
Before we get into the mechanics, it is worth explaining why we think methodology transparency is the whole game.
There are a handful of competitors in the niche research space. Most of them give you a list. IdeaBrowser.com — built by Greg Isenberg, DR 39, reasonable organic traffic — offers a curated catalog of niche ideas organized by tags like "B2B" or "AI tools." It is genuinely useful as a starting point. But the tags are manually assigned, the scoring is opaque, and the list is essentially static. There is no daemon running overnight to tell you whether the "AI scheduling tools for dentists" niche has gotten more or less competitive since the article was written.
Ideagrape focuses almost entirely on willingness-to-pay signals. That is one important dimension, but it misses the broader picture. A niche can have high WTP and still fail because the tooling required to serve it is prohibitively complex, or because the market timing is wrong, or because there is already one dominant player who owns 80% of the search real estate.
We are not criticizing those products. We are explaining our design choice: if you are going to score niches at scale, you need a multi-dimensional model with real-time data feeds, and you need to be honest about the math.
Our scoring engine runs 24 hours a day, 7 days a week. It processes approximately 40 niches per hour, spending roughly 90 seconds on each one. That 90 seconds involves pulling data from 11 different platforms, running it through 5 scoring dimensions with known weights and mathematical curves, and writing the result to a database that powers the MNB interface. As of this writing, we have scored 2,305 niches. We have collected 16,907 evidence data points across those niches. And we have run a total of 78 research skills covering everything from community signal analysis to financial modeling.
Here is what all of that looks like under the hood.
The 11 Platforms: Where We Get the Data
Our scoring system is only as good as its inputs. We deliberately chose a diverse set of platforms that give us different kinds of signal: some tell us about consumer interest, some tell us about creator monetization, some tell us about search demand, and some tell us about commercial intent. No single platform tells the whole story.
Social Platforms (via ScrapeCreators API)
We pull social data using ScrapeCreators, a purpose-built API for social media data at scale:
YouTube is our primary signal for niche viability. We look at channel counts, subscriber growth rates, average view counts per video, comment engagement, and monetization indicators. A niche with 50 channels averaging 200,000 views per video and active comment threads signals robust consumer interest. A niche with 3 channels averaging 8,000 views signals a market that either does not exist or has not been discovered yet — both interpretations matter.
Reddit gives us problem-intensity signal. We are not just counting subreddit members; we are looking at post frequency, upvote-to-comment ratios, the presence of "I wish someone would build X" threads, and the emotional tenor of complaints. Reddit is where people articulate their pain unfiltered. A subreddit where people regularly vent about the inadequacy of existing tools is a niche where there is genuine demand for a better solution.
TikTok has become one of our most valuable early-trend indicators. Because the algorithm surfaces content based on engagement rather than following, niches that are gaining traction show up in TikTok view counts before they register in Google search volume. We look at view counts, hashtag velocity, and creator account growth in the niche.
Instagram gives us a different read on the same consumer-facing trends, with particular value for lifestyle-adjacent niches (health, finance, productivity) where Instagram's visual format attracts significant creator activity.
Pinterest is a strong signal for purchase intent in visually-driven niches. Pinterest users are often explicitly in a discovery-and-buying mindset. High Pinterest engagement in a niche correlates with higher commercial intent than the same level of engagement on, say, Twitter.
Twitter/X, Facebook, LinkedIn, and Threads round out the social picture. LinkedIn is particularly valuable for B2B niches — we look at LinkedIn group sizes, post engagement in niche-adjacent professional communities, and job posting trends that might signal industry growth. Facebook groups give us another community size signal with different demographic skew than Reddit.
Search and Trend Platforms
Google Trends tells us the direction of demand over time. We are not just looking at whether a term has search volume; we are looking at whether that volume is growing, plateauing, or declining. A niche with moderate current volume but a 40% growth trend over 18 months scores better than a niche with higher current volume that peaked two years ago. Timing is everything, and Google Trends is our timing instrument.
DataForSEO handles keyword research at scale. We pull search volume, keyword difficulty, cost-per-click, and competition data for the primary and long-tail keywords associated with each niche. CPC is particularly valuable because it reveals what advertisers are willing to pay — which is a proxy for the lifetime value of a customer in that niche. A niche with CPC of $8 attracts different economics than one with CPC of $0.40.
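To make the CPC logic concrete, here is a toy bucketing heuristic (the thresholds are illustrative, not our production calibration) showing how the $8 and $0.40 examples from above land in different commercial-intent tiers:

```python
def commercial_intent_tier(cpc_usd: float) -> str:
    """Bucket a keyword's cost-per-click into a rough commercial-
    intent tier. Thresholds are illustrative, not our production
    calibration."""
    if cpc_usd >= 5.00:
        return "high"      # advertisers compete hard: strong monetization proxy
    if cpc_usd >= 1.50:
        return "moderate"
    return "low"           # cheap clicks: weak willingness to pay

print(commercial_intent_tier(8.00))  # -> high
print(commercial_intent_tier(0.40))  # -> low
```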
We also use Google Search directly (via SerpAPI) for three specific data pulls per niche: top-ranking domain authority analysis, ad presence scanning, and featured snippet detection. Who currently ranks for the niche's primary keywords, and how hard are they to displace?
The 5 Scoring Dimensions
Raw data is not a score. We need a model that translates data across 11 platforms into a single number that is honest, stable, and actionable. Our model has 5 dimensions, each with a defined weight and mathematical formulation.
Dimension 1: Opportunity Score (20% weight, base 4)
The Opportunity score asks: is there a real market here, and is it underserved?
We look at aggregate search volume, social discussion volume, and the gap between expressed demand and available solutions. A niche with high Reddit complaint volume but few dedicated tools scores high on opportunity. A niche with moderate search volume but where the top-3 results are Wikipedia entries and forum threads (not purpose-built products) scores high on opportunity.
The base score of 4 means a niche with average signals starts at 4 out of 10. Signals above baseline push the score up; signals below push it down. The weight of 20% in the final composite reflects our view that opportunity is necessary but not sufficient — you can have massive opportunity in a space that is too difficult to execute in.
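The base-plus-signals mechanic can be sketched in a few lines. This is an assumed structure on the 0-10 sub-scale, not our exact implementation:

```python
def dimension_score(base: float, signal_deltas: list[float]) -> float:
    """Assumed structure of the base-plus-signals mechanic: start at
    the dimension's base on the 0-10 sub-scale, add deltas for
    above-baseline signals, subtract for below, clamp to [0, 10]."""
    return max(0.0, min(10.0, base + sum(signal_deltas)))

# Opportunity starts at base 4; two strong signals, one weak one:
print(round(dimension_score(4, [+1.5, +0.8, -0.4]), 2))  # -> 5.9
```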
Dimension 2: Problem Score (10% weight, base 4)
The Problem score asks: is the underlying pain real, acute, and recurring?
This is our Reddit-heavy dimension. We count complaint posts, analyze sentiment intensity, and look for what we call "desperation signals" — threads where users describe having tried multiple solutions and still being unsatisfied. We also look for willingness-to-pay signals in social discussions: threads where people mention paying for a solution, or explicitly ask "does anyone know a tool that does X?"
The 10% weight reflects a deliberate choice. Problem intensity is important for validating that a market exists, but it does not tell you whether you can win. A niche can have intense, real problems and still be a graveyard for founders if the structural economics are wrong. We capture problem signal without over-weighting it.
Dimension 3: Feasibility Score (30% weight, base 5)
Feasibility is our heaviest dimension, and deliberately so.
It asks: given the technical complexity, competitive landscape, and resource requirements, can a small team realistically build and sell something in this space?
We pull data from multiple inputs: keyword difficulty (from DataForSEO) tells us how hard it is to acquire organic traffic. Domain authority of existing players tells us the competitive ceiling. Our internal knowledge base of niche categories flags domains with high regulatory overhead (healthcare, finance, legal) that add execution complexity. API availability — can you actually build the product, or does it require scraping, licensing deals, or proprietary data?
The base of 5 reflects a deliberate calibration choice. We want niches to earn high feasibility scores, not have them by default. A score of 5 means "average feasibility, no obvious blockers." Most niches sit below 5 because most spaces have at least some meaningful competitive moat or technical complexity.
This is the dimension that kills the most niches. Ideas that sound exciting on paper often score 3-4 on feasibility because the keyword difficulty is 80+ or because there is a well-funded incumbent with 5 years of head start.
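As a rough sketch of how those inputs could combine, here is one hypothetical aggregation. The deltas are illustrative, not our real calibration:

```python
def feasibility_score(kd: int, top_da: int, regulated: bool,
                      api_available: bool) -> float:
    """Hypothetical aggregation of the feasibility inputs listed
    above, applied to the base of 5 on a 0-10 sub-scale. The deltas
    are illustrative, not our real calibration."""
    delta = 0.0
    delta -= 2.0 if kd >= 70 else (0.5 if kd >= 50 else 0.0)  # keyword difficulty
    delta -= 1.5 if top_da >= 80 else 0.0                     # entrenched incumbents
    delta -= 1.5 if regulated else 0.0                        # regulatory overhead
    delta += 1.0 if api_available else -1.0                   # buildability
    return max(0.0, min(10.0, 5 + delta))

# KD 80+ plus a dominant incumbent drags feasibility well below base:
print(feasibility_score(kd=82, top_da=85, regulated=False, api_available=True))  # -> 2.5
```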
Dimension 4: Timing Score (20% weight, base 5)
Timing asks: is now the right moment for this niche?
We pull Google Trends data (18-month trajectory), TikTok hashtag velocity, and our internal evidence collection rate (are we finding more or fewer relevant posts about this niche over time?). We also look at recent funding news and product launches in adjacent spaces — if three well-funded startups just entered a space, that can be both a validation signal and a warning.
The scoring curve here is particularly important. A niche in sharp decline scores near 1 regardless of its other merits. A niche at the beginning of a clear growth curve — rising Trends data, growing subreddit, increasing TikTok hashtag use — scores high. The inflection point is the most valuable signal: niches whose Trends data shows a "just breaking out" pattern get scored aggressively high because the window of opportunity is often narrow and competition is still low.
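A minimal version of breakout detection, assuming a simple recent-vs-baseline ratio rather than our actual curve, looks like this:

```python
def breakout_ratio(monthly_values: list[float]) -> float:
    """Illustrative breakout detector, not our actual curve: mean of
    the last 3 months divided by the mean of everything before.
    Near 1.0 is flat; well above 1.0 suggests the 'just breaking
    out' pattern. Needs at least 4 data points."""
    recent = monthly_values[-3:]
    baseline = monthly_values[:-3]
    return (sum(recent) / len(recent)) / (sum(baseline) / len(baseline))

flat     = [50, 52, 49, 51, 50, 48, 51, 50, 49]
breaking = [20, 22, 21, 23, 25, 30, 45, 60, 80]

print(round(breakout_ratio(flat), 2))      # -> 1.0
print(round(breakout_ratio(breaking), 2))  # -> 2.62
```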
Dimension 5: GTM Score (20% weight, base 3)
GTM (Go-to-Market) asks: are the customer acquisition channels clear, affordable, and accessible?
This is where keyword CPC matters most. High CPC means either that paid acquisition is expensive (which raises your CAC and changes your unit economics) or that the niche is monetizable enough that advertisers compete for it (which validates commercial potential). We track both interpretations.
We also score based on community accessibility: a niche with an active subreddit, a handful of large Facebook groups, and a cluster of YouTube channels gives you built-in distribution channels for content marketing, seeding, and community presence. A niche that exists only in enterprise settings — accessible only through trade shows and direct sales — has a higher GTM bar for a solo founder.
The base of 3 is the lowest of any dimension. Most niches do not have obvious, low-friction customer acquisition paths. This is honest: GTM is hard, and we do not want to paper over that reality.
The Math: Why Continuous Log Curves Beat Step Functions
Our scoring engine has gone through three major versions. Version 1 was a weighted average of raw platform metrics — deeply flawed because raw numbers are not comparable across platforms (a million TikTok views means something very different from a million LinkedIn impressions). Version 2 introduced step functions: if search volume > 10,000, add 2 points; if search volume > 50,000, add 3 points; etc.
Step functions were better, but they created cliff edges. A niche with 49,999 monthly searches scored meaningfully worse than one with 50,001 monthly searches, which is silly. Real signal does not work that way. Markets do not have sharp edges.
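The cliff edge is easy to demonstrate. The snippet below contrasts a version-2 style step function (using the 10,000 and 50,000 thresholds from the example above) with an illustrative log curve; the log curve's calibration bounds are assumptions for illustration, not our production values:

```python
import math

def step_score(searches: int) -> int:
    """Version-2 style step function: thresholds at 10,000 and
    50,000 monthly searches (the example thresholds from the text)."""
    points = 0
    if searches > 10_000:
        points += 2
    if searches > 50_000:
        points += 3
    return points

def log_score(searches: int) -> float:
    """Version-3 style continuous curve. The calibration bounds
    (100 to 1,000,000 searches) are assumptions for illustration."""
    lo, hi = math.log(100), math.log(1_000_000)
    t = (math.log(max(searches, 1)) - lo) / (hi - lo)
    return round(10 * max(0.0, min(1.0, t)), 2)

print(step_score(49_999), step_score(50_001))  # 2 vs 5: a cliff edge
print(log_score(49_999), log_score(50_001))    # effectively identical
```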
Version 3 — our current engine — uses continuous log curves throughout.
The core insight is that signal strength follows a logarithmic relationship with raw metrics. The difference between 100 monthly searches and 1,000 monthly searches is enormous (the niche exists vs. the niche is detectable). The difference between 100,000 and 110,000 monthly searches is negligible. A log curve captures this correctly.
For each metric, we normalize the log of the raw value against an expected distribution for that metric type, producing a value between 0 and 10. These normalized sub-scores then feed into the dimension scores with defined weights. The dimension scores combine into the overall score using the weights described above.
The composite formula (simplified):

    overall_score = (
        opportunity_score * 0.20 +
        problem_score     * 0.10 +
        feasibility_score * 0.30 +
        timing_score      * 0.20 +
        gtm_score         * 0.20
    )
Each dimension score sits between 0 and 100, so the overall score also sits between 0 and 100. (The per-dimension bases quoted earlier live on the 0-10 sub-score scale described above; a base of 4 corresponds to a starting dimension score of 40 here.)
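That formula translates directly to code. A minimal sketch:

```python
WEIGHTS = {
    "opportunity": 0.20,
    "problem":     0.10,
    "feasibility": 0.30,
    "timing":      0.20,
    "gtm":         0.20,
}

def overall_score(dims: dict[str, float]) -> float:
    """Weighted composite exactly as written above. Dimension
    scores are 0-100, so the overall score is also 0-100."""
    return sum(dims[name] * weight for name, weight in WEIGHTS.items())

# Weights sum to 1, so uniform dimension scores pass through unchanged:
print(overall_score({name: 50 for name in WEIGHTS}))  # -> 50.0
```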
The log curves have a compounding effect: genuinely exceptional signal across multiple platforms produces scores that feel distinctly different from merely-good signal. This is intentional. We want the scoring distribution to be honest, not flattering.
The Distribution: What 2,305 Scored Niches Actually Look Like
Here is where things get real.
We have scored 2,305 niches as of this writing. Here is the distribution:
| Score Range | Niches | % of Total | Interpretation |
|-------------|--------|------------|----------------|
| 80-100 | 8 | 0.3% | Exceptional — rare, high-conviction opportunities |
| 70-79 | 44 | 1.9% | Strong — validated, worth serious research |
| 65-69 | 89 | 3.9% | Validated — passes our threshold |
| 55-64 | 387 | 16.8% | Promising — needs more development or timing |
| 40-54 | 931 | 40.4% | Mixed — real signal in some dimensions, weak in others |
| 20-39 | 756 | 32.8% | Weak — insufficient demand or feasibility |
| 0-19 | 90 | 3.9% | No signal — speculative or dead |
141 niches (6.1%) have cleared our validation threshold of 65. Only 52 (2.3%) score 70 or above.
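The table's arithmetic is internally consistent, and you can verify it yourself:

```python
# Niche counts per score band, as published in the table above
counts = {
    "80-100": 8, "70-79": 44, "65-69": 89, "55-64": 387,
    "40-54": 931, "20-39": 756, "0-19": 90,
}

total = sum(counts.values())
validated = counts["80-100"] + counts["70-79"] + counts["65-69"]
strong = counts["80-100"] + counts["70-79"]

print(total)                                         # -> 2305
print(validated, round(100 * validated / total, 1))  # -> 141 6.1
print(strong, round(100 * strong / total, 1))        # -> 52 2.3
```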
This distribution is what honest scoring looks like. If we were producing a polished list of "Top 100 SaaS Niches" and reverse-engineering our scores to make it look good, you would see a very different distribution — perhaps 40% of niches scoring above 65. The fact that only 6.1% make it is not a bug in our methodology. It is evidence that the methodology has integrity.
For context: when we ran version 2 of the scoring engine (step functions), roughly 18% of niches scored above our then-threshold of 70. The shift to log curves in version 3 corrected for systematic grade inflation. Scores became harder to achieve. Fewer niches passed. The ones that did pass became genuinely meaningful.
High vs. Low Scorers: What the Data Shows
To make the scoring concrete, let us walk through what separates a high-scorer from a low-scorer.
A High-Scoring Niche (Score: 74)
One of our strongest-scoring validated niches is in the professional services automation space — specifically, tools that help independent accountants manage client communication and document collection workflows. Here is what its data looks like:
- YouTube: 340+ channels in the adjacent space, average 45,000 views per video, comment sections full of "what tool do you use for this?" questions
- Reddit: Active communities (r/taxpro, r/accounting) with recurring complaint threads about manual document collection taking 3-4 hours per client onboarding
- Google Trends: 24-month growth trajectory, accelerating in the last 6 months as AI-adjacent tooling has flooded adjacent markets but not this specific workflow
- DataForSEO: Primary keyword at 18,000 monthly searches, KD 42, CPC $7.20 — high enough to signal commercial intent, low enough to be winnable with content
- GTM: Multiple established communities for seeding, influencer channels with monetized audiences, clear path to content marketing
Scoring breakdown: Opportunity 68 (large, underserved market), Problem 71 (intense, documented pain), Feasibility 79 (achievable with a 2-person team, no regulatory moat), Timing 74 (accelerating growth curve), GTM 72 (clear channels). Overall: 74.
A Low-Scoring Niche (Score: 31)
An example from the other end: a niche around AI tools for luxury yacht maintenance. Here is the data:
- YouTube: 12 channels, average 4,200 views, minimal comment engagement
- Reddit: No dedicated subreddit. Mentions scattered across luxury lifestyle communities with low pain intensity ("it would be nice if X existed" rather than "I need X and can't find it")
- Google Trends: Flat to declining 18-month trajectory
- DataForSEO: Primary keyword at 1,100 monthly searches, KD 28, CPC $1.80 — low CPC signals low commercial intent despite low difficulty
- GTM: No accessible communities, no content marketing path, likely requires direct enterprise sales
Scoring breakdown: Opportunity 28 (small market, limited organic discovery), Problem 22 (low pain intensity), Feasibility 44 (low competition but also low demand), Timing 31 (flat trajectory), GTM 19 (no clear channel). Overall: 31.
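You can reproduce both overall scores by plugging the published dimension breakdowns into the stated weights; the weighted sums round to 74 and 31:

```python
# Stated weights: opportunity, problem, feasibility, timing, gtm
WEIGHTS = [0.20, 0.10, 0.30, 0.20, 0.20]

def overall(dims: list[float]) -> int:
    return round(sum(d * w for d, w in zip(dims, WEIGHTS)))

accountant_tools = [68, 71, 79, 74, 72]  # high scorer above
yacht_ai         = [28, 22, 44, 31, 19]  # low scorer above

print(overall(accountant_tools))  # -> 74
print(overall(yacht_ai))          # -> 31
```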
The luxury yacht niche might appeal to someone's imagination. It sounds exclusive and high-margin. But the data does not support it. Our scoring engine surfaces that signal, regardless of how interesting the niche sounds on paper.
Why the Threshold is 65, Not 70 or 80
We set our VALIDATED threshold at 65 after extensive calibration.
Originally we used 70 as the threshold. At that level, only 52 niches in our database would be validated — approximately 2.3% of all scored niches. That felt too restrictive. Not because 52 is an embarrassingly small number (it is not), but because we were finding niches that scored between 65 and 69 that our research team consistently agreed were genuinely viable. The 65-69 band had real signal, not marginal signal.
We lowered the threshold to 65 after analyzing the 89 niches in that band. What we found: niches scoring 65-69 tended to be strong on 3-4 dimensions but had one weak link — often timing (trend is solid but not accelerating) or GTM (there is a path, but it requires more creativity). These are addressable weaknesses, not disqualifying ones.
At 80, the threshold would be so restrictive as to be unhelpful — 8 niches is not a useful product feature. At 60, we start admitting niches with genuinely mixed signals that we cannot confidently recommend. 65 is where we find niches we are comfortable staking our reputation on.
The threshold is also meaningful as a quality signal: if a niche you are interested in does not score 65, that is not necessarily a death sentence. It might be an early-stage niche where timing has not yet caught up. But it is an honest read on the current state of that market.
The Cost Structure: $0.06 Per Niche
One question we get from technically curious founders: what does it cost to run this system?
The answer is approximately $0.06 per niche scored. Here is the breakdown:
- ScrapeCreators (6 social platform calls): ~$0.012
- Google Search API (3 calls): ~$0.006
- DataForSEO Trends: ~$0.009
- DataForSEO Keywords: ~$0.020
- Compute and overhead: ~$0.013
At 40 niches per hour and 24/7 operation, that is approximately $57.60 per day in data costs for continuous scoring operations. It is not a trivial number, but it is the honest cost of running a real system with real data rather than a blog post with a manually curated list.
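The per-niche and per-day figures follow directly from the line items:

```python
# Per-niche data cost line items from the breakdown above (USD)
line_items = {
    "scrapecreators": 0.012,  # 6 social platform calls
    "google_search":  0.006,  # 3 SerpAPI calls
    "dfs_trends":     0.009,
    "dfs_keywords":   0.020,
    "compute":        0.013,
}

per_niche = sum(line_items.values())
per_day = per_niche * 40 * 24  # 40 niches/hour, running 24/7

print(round(per_niche, 3))  # -> 0.06
print(round(per_day, 2))    # -> 57.6
```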
The data infrastructure compounds over time. Our 16,907 evidence data points do not disappear — they are stored, indexed, and available for re-analysis as our models improve. When we released version 3 of the scoring engine, we were able to rescore all 2,305 existing niches against the new model in a single batch operation, because all the underlying data was still there.
What Competitors Are Not Measuring
To close the methodology discussion, it is worth being explicit about what our scoring captures that alternatives do not.
IdeaBrowser.com offers human curation with tag-based filtering. The curation quality is genuinely high — Greg Isenberg has a good eye. But the signals are not quantified, the data does not refresh, and there is no formal model telling you why a niche made the list. You are buying editorial judgment, not a scoring system.
Ideagrape focuses on WTP (willingness-to-pay) signals drawn primarily from social media mentions. This is valuable but narrows the picture. WTP tells you demand exists; it does not tell you about competition intensity, timing, technical feasibility, or GTM channel availability. A niche with high WTP and terrible keyword dynamics and no communities to seed is still a hard business to build.
Manual research methods — reading subreddits, pulling Ahrefs data, doing keyword research yourself — are what we do at scale. A founder spending a weekend doing manual niche research might pull data from 3-4 platforms and spend 8-12 hours on a single niche. We do 11 platforms in 90 seconds, with consistent methodology across 2,305 niches, for $0.06 each.
The comparison is not "our system vs. manual research." Manual research, done deeply, will always surface nuances our system misses. The comparison is "our system vs. no systematic research" — which is where most founders are when they evaluate niche ideas. They read a list, get excited, and skip the data work. We are building the infrastructure that makes the data work unnecessary to do yourself.
What We Are Still Building
Transparency means acknowledging limitations.
Our YouTube data currently lacks per-video view counts for the community signals path (a ScrapeCreators API limitation we are working around). Our timing scores are strong on Google Trends data but less nuanced on the qualitative dimension of "regulatory or technology shift creating new opportunity" — that kind of signal is hard to automate.
We are also aware that our scoring model is calibrated against historical data from niches that went through a specific window of time. Market conditions shift. A model trained on 2024-2025 niche data may not perfectly capture the dynamics of an AI-disrupted 2026-2027 market. We are actively working on a version 4 that incorporates more leading indicators and builds in explicit uncertainty ranges rather than single-point scores.
And we are building out 78 research skills — modules that go beyond scoring into active analysis. Skills like value_ladder (mapping pricing tiers from free to premium), buyer_playbook (mapping acquisition channels to customer personas), and market_gap_generator (identifying underserved segments within a validated niche) run after a niche passes the 65 threshold and generate detailed research reports. That is the next layer: not just "is this niche good?" but "here is exactly how to attack it."
The Bottom Line
The scoring methodology we have built is not perfect. No model is. But it is honest, documented, grounded in real data from 11 platforms, and — most importantly — calibrated to say no.
A system that validates 80% of niches is not a scoring system. It is a compliment machine. A system that validates 6.1% of niches is making a real claim about the world: most niche ideas, evaluated honestly against real market data, do not have the combination of signals needed to justify building a product.
The 141 niches in our VALIDATED tier are the ones where the data converges — where YouTube says the audience is engaged, Reddit says the pain is real, DataForSEO says the keywords are winnable, Google Trends says the timing is right, and our feasibility scoring says a small team can actually build something competitive. That convergence is rare. It should be.
If you want to explore the full scoring breakdown on any niche in our database — including the raw platform data behind each dimension score — MicroNicheBrowser has that. Free users get access to overall scores. Pro users get the full dimension breakdown, the evidence wall, and the 78-skill research report. Enterprise users get JSON export, which means you can pipe our scoring output directly into your own LLM-driven research and planning workflow.
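For Enterprise users consuming the JSON export, the downstream workflow is straightforward. The field names below are hypothetical (check the actual export schema), but the shape of the pipeline is the point: structured scores in, programmatic filtering out.

```python
import json

# Hypothetical export shape: the field names here are illustrative,
# not MNB's documented schema.
export = json.loads("""
[
  {"niche": "accountant client workflows", "overall": 74,
   "dims": {"opportunity": 68, "problem": 71, "feasibility": 79,
            "timing": 74, "gtm": 72}},
  {"niche": "AI yacht maintenance", "overall": 31,
   "dims": {"opportunity": 28, "problem": 22, "feasibility": 44,
            "timing": 31, "gtm": 19}}
]
""")

# Keep only niches at or above the VALIDATED threshold of 65:
validated = [n["niche"] for n in export if n["overall"] >= 65]
print(validated)  # -> ['accountant client workflows']
```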
The algorithm is no longer a black box. Now you know exactly what is inside it.
MicroNicheBrowser scores 2,305+ niches across 11 platforms using a weighted, log-curve scoring model. The VALIDATED threshold of 65 means only niches with genuine, converging signals across opportunity, problem intensity, feasibility, timing, and go-to-market clarity are recommended.
Browse Validated Niches — See Methodology in Action — Start Your Free Trial
Every niche score on MicroNicheBrowser uses data from 11 live platforms. See our scoring methodology →