Try it: npx skills add https://github.com/nozomio-labs/nia-skill --skill nia

Direct API access to Nia for indexing and searching code repositories, documentation, research papers, HuggingFace datasets, local folders, Slack workspaces, Google Drive, and packages.
Nia provides tools for indexing and searching external repositories, research papers, documentation, packages, local/private sources, and performing AI-powered research. Its primary goal is to reduce hallucinations in LLMs and provide up-to-date context for AI agents.
Either:

- ./scripts/auth.sh signup <email> <password> <organization_name>, then ./scripts/auth.sh bootstrap-key <bootstrap_token>
- ./scripts/auth.sh login-key <email> <password>
- npx nia-wizard@latest (guided setup)

Then set the NIA_API_KEY environment variable:
export NIA_API_KEY="your-api-key-here"
Or store it in a config file:
mkdir -p ~/.config/nia
echo "your-api-key-here" > ~/.config/nia/api_key
Note: the NIA_API_KEY environment variable takes precedence over the config file.
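The precedence rule can be sketched as a small resolver. `resolve_nia_key` is a hypothetical helper for illustration, not one of the skill's scripts:

```shell
# Sketch of the precedence rule: the NIA_API_KEY environment variable wins
# over the config file. resolve_nia_key is a hypothetical helper.
resolve_nia_key() {
  if [ -n "${NIA_API_KEY:-}" ]; then
    echo "$NIA_API_KEY"                     # env var takes precedence
  elif [ -f "$HOME/.config/nia/api_key" ]; then
    cat "$HOME/.config/nia/api_key"         # fall back to the config file
  else
    echo "no Nia API key configured" >&2
    return 1
  fi
}
```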
Requires: curl, jq

BEFORE using web fetch or web search, you MUST:
1. Check what is already indexed: ./scripts/repos.sh list, ./scripts/sources.sh list, ./scripts/slack.sh list, ./scripts/google-drive.sh list
2. Search indexed sources first: search.sh query for repos/docs/Slack, and sources.sh tree / sources.sh read / sources.sh grep for any indexed source, including local folders and Google Drive
3. For Slack: SLACK_WORKSPACES=<installation_id> ./scripts/search.sh query "question" to search Slack messages, or slack.sh grep / slack.sh messages for direct access
4. For Google Drive: google-drive.sh browse, google-drive.sh update-selection, and google-drive.sh index, then inspect the resulting source via sources.sh
5. If the source is not indexed yet: repos.sh index or sources.sh index, then search
6. If you do not know the URL: search.sh web or search.sh deep to discover URLs, then index

Why this matters: indexed sources provide more accurate, complete context than web fetches. Web fetch returns truncated/summarized content, while Nia provides full source code and documentation.
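The indexed-first rule can be wrapped in a small pre-flight check. This is a sketch that assumes the `list` output mentions the identifier verbatim; `is_indexed` is a hypothetical helper:

```shell
# Hypothetical pre-flight check: look for an identifier in the output of a
# `list` command before deciding whether to index first or search directly.
is_indexed() {
  local needle="$1" listing="$2"   # listing = output of repos.sh/sources.sh list
  printf '%s\n' "$listing" | grep -qF "$needle"
}

# Usage sketch:
#   if is_indexed "vercel/ai" "$(./scripts/repos.sh list)"; then
#     ./scripts/search.sh query "How does streaming work?" "vercel/ai"
#   else
#     ./scripts/repos.sh index vercel/ai   # index first, then search
#   fi
```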
IMPORTANT: search.sh universal does NOT search Slack workspaces. To search Slack, you MUST use search.sh query with the SLACK_WORKSPACES env var, or use slack.sh grep / slack.sh messages directly.
- List indexed repos and sources with repos.sh list / sources.sh list, and check connected Slack/Drive installations with slack.sh list / google-drive.sh list
- Explore structure with repos.sh tree, sources.sh tree, slack.sh channels <id>, or google-drive.sh browse <installation_id>
- Use search.sh query (with SLACK_WORKSPACES for Slack), repos.sh grep, repos.sh read, and sources.sh grep / sources.sh read for targeted searches
- Branding extraction is available at index time (EXTRACT_BRANDING=true)

All scripts are in ./scripts/. Most authenticated wrappers use lib.sh for shared auth/curl helpers; auth.sh is standalone because it mints the API key. Base URL: https://apigcp.trynia.ai/v2
Each script uses subcommands: ./scripts/<script>.sh <command> [args...]
Run any script without arguments to see available commands and usage.
./scripts/auth.sh signup <email> <password> <organization_name> # Create account
./scripts/auth.sh bootstrap-key <bootstrap_token> # Exchange one-time token
./scripts/auth.sh login-key <email> <password> [org_id] # Mint fresh API key
Env: SAVE_KEY=true to write ~/.config/nia/api_key, IDEMPOTENCY_KEY
./scripts/sources.sh index "https://docs.example.com" [limit] # Index docs
./scripts/sources.sh list [type] [limit] [offset] # List sources
./scripts/sources.sh get <source_id> [type] # Get source details
./scripts/sources.sh resolve <identifier> [type] # Resolve name/URL to ID
./scripts/sources.sh update <source_id> [display_name] [cat_id] # Update source
./scripts/sources.sh delete <source_id> [type] # Delete source
./scripts/sources.sh sync <source_id> [type] # Re-sync source
./scripts/sources.sh rename <source_id_or_name> <new_name> # Rename source
./scripts/sources.sh subscribe <url> [source_type] [ref] # Subscribe to global source
./scripts/sources.sh read <source_id> [path] # Read content
./scripts/sources.sh grep <source_id> <pattern> [path] # Grep content
./scripts/sources.sh tree <source_id> # Get file tree
./scripts/sources.sh ls <source_id> # Shallow tree view
./scripts/sources.sh classification <source_id> [type] # Get/update classification
./scripts/sources.sh curation <source_id> [type] # Get trust/overlay/annotations
./scripts/sources.sh update-curation <source_id> [type] # Update trust/overlay
./scripts/sources.sh annotations <source_id> [type] # List annotations
./scripts/sources.sh add-annotation <source_id> <content> [kind] # Create annotation
./scripts/sources.sh update-annotation <source_id> <annotation_id> [content] [kind] # Update annotation
./scripts/sources.sh delete-annotation <source_id> <annotation_id> [type] # Delete annotation
./scripts/sources.sh assign-category <source_id> <cat_id|null> # Assign category
./scripts/sources.sh upload-url <filename> # Get signed URL for file upload (PDF, CSV, TSV, XLSX, XLS)
./scripts/sources.sh bulk-delete <id:type> [id:type ...] # Bulk delete resources
Index environment variables: DISPLAY_NAME, FOCUS, EXTRACT_BRANDING, EXTRACT_IMAGES, IS_PDF, IS_SPREADSHEET, URL_PATTERNS, EXCLUDE_PATTERNS, MAX_DEPTH, WAIT_FOR, CHECK_LLMS_TXT, LLMS_TXT_STRATEGY, INCLUDE_SCREENSHOT, ONLY_MAIN_CONTENT, ADD_GLOBAL, MAX_AGE
List environment variables: STATUS, QUERY, CATEGORY_ID
Generic source env: TYPE=<repository|documentation|research_paper|huggingface_dataset|local_folder|slack|google_drive>, BRANCH, URL, PAGE, TREE_NODE_ID, LINE_START, LINE_END, MAX_LENGTH, MAX_DEPTH, SYNC_JSON
Classification update env: ACTION=update, CATEGORIES=cat1,cat2, INCLUDE_UNCATEGORIZED=true|false
Curation update env: TRUST_LEVEL (low|medium|high), OVERLAY_KIND (custom|nia_verified), OVERLAY_SUMMARY, OVERLAY_GUIDANCE, RECOMMENDED_QUERIES (csv), CLEAR_OVERLAY=true|false
Grep environment variables: CASE_SENSITIVE, WHOLE_WORD, FIXED_STRING, OUTPUT_MODE, HIGHLIGHT, EXHAUSTIVE, LINES_AFTER, LINES_BEFORE, MAX_PER_FILE, MAX_TOTAL
Flexible identifiers: Most endpoints accept UUID, display name, or URL:
- UUID: 550e8400-e29b-41d4-a716-446655440000
- Display name: Vercel AI SDK - Core, openai/gsm8k
- URL: https://docs.trynia.ai/, https://arxiv.org/abs/2312.00752

./scripts/repos.sh index <owner/repo> [branch] [display_name]  # Index repo (ADD_GLOBAL=false to keep private)
./scripts/repos.sh list # List indexed repos
./scripts/repos.sh status <owner/repo> # Get repo status
./scripts/repos.sh read <owner/repo> <path/to/file> # Read file
./scripts/repos.sh grep <owner/repo> <pattern> [path_prefix] # Grep code (REF= for branch)
./scripts/repos.sh tree <owner/repo> [branch] # Get file tree
./scripts/repos.sh delete <repo_id> # Delete repo
./scripts/repos.sh rename <repo_id> <new_name> # Rename display name
Tree environment variables: MAX_DEPTH, INCLUDE_PATHS, EXCLUDE_PATHS, FILE_EXTENSIONS, EXCLUDE_EXTENSIONS, SHOW_FULL_PATHS
./scripts/search.sh query <query> <repos_csv> [docs_csv] # Query specific repos/sources
./scripts/search.sh universal <query> [top_k] # Search ALL indexed sources
./scripts/search.sh web <query> [num_results] # Web search
./scripts/search.sh deep <query> [output_format] # Deep research (Pro)
query — targeted search with AI response and sources. Env: LOCAL_FOLDERS, SLACK_WORKSPACES, CATEGORY, MAX_TOKENS, STREAM, INCLUDE_SOURCES, FAST_MODE, SKIP_LLM, REASONING_STRATEGY (vector|tree|hybrid), MODEL, SEARCH_MODE, BYPASS_CACHE, SEMANTIC_CACHE_THRESHOLD, INCLUDE_FOLLOW_UPS, TRUST_MINIMUM_TIER, TRUST_VERIFIED_ONLY, TRUST_REQUIRE_OVERLAY. Slack filters: SLACK_CHANNELS, SLACK_USERS, SLACK_DATE_FROM, SLACK_DATE_TO, SLACK_INCLUDE_THREADS. Local source filters: SOURCE_SUBTYPE, DB_TYPE, CONNECTOR_TYPE, CONVERSATION_ID, CONTACT_ID, SENDER_ROLE, TIME_AFTER, TIME_BEFORE. This is the only search command that supports Slack.
universal — hybrid vector + BM25 across all indexed public sources (repos + docs + HF datasets). Does NOT include Slack. Env: INCLUDE_REPOS, INCLUDE_DOCS, INCLUDE_HF, ALPHA, COMPRESS, MAX_TOKENS, MAX_SOURCES, SOURCES_FOR_ANSWER, BYPASS_CACHE, SEMANTIC_CACHE_THRESHOLD, BOOST_LANGUAGES, EXPAND_SYMBOLS
web — web search. Env: CATEGORY (github|company|research|news|tweet|pdf|blog), DAYS_BACK, FIND_SIMILAR_TO
deep — deep AI research (Pro). Env: VERBOSE
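Since only `query` reaches Slack (see the note above), the choice of subcommand can be encoded in a tiny dispatcher. A sketch; `pick_search_command` is a hypothetical helper:

```shell
# Hypothetical dispatcher: only search.sh query supports Slack (via the
# SLACK_WORKSPACES env var); universal covers all indexed public sources.
pick_search_command() {
  if [ "$1" = "slack" ]; then
    echo "query"       # pair with SLACK_WORKSPACES=<installation_id>
  else
    echo "universal"   # hybrid vector + BM25 across repos/docs/HF datasets
  fi
}
```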
./scripts/oracle.sh run <query> [repos_csv] [docs_csv] # Run research (synchronous)
./scripts/oracle.sh job <query> [repos_csv] [docs_csv] # Create async job (recommended)
./scripts/oracle.sh job-status <job_id> # Get job status/result
./scripts/oracle.sh job-stream <job_id> # Stream async job updates
./scripts/oracle.sh job-cancel <job_id> # Cancel running job
./scripts/oracle.sh jobs-list [status] [limit] # List jobs
./scripts/oracle.sh sessions [limit] # List research sessions
./scripts/oracle.sh session-detail <session_id> # Get session details
./scripts/oracle.sh session-messages <session_id> [limit] # Get session messages
./scripts/oracle.sh session-chat <session_id> <message> # Follow-up chat (SSE stream)
./scripts/oracle.sh session-delete <session_id> # Delete session and messages
./scripts/oracle.sh 1m-usage # Get daily 1M context usage
Environment variables: OUTPUT_FORMAT, MODEL (claude-opus-4-6|claude-sonnet-4-5-20250929|...)
Autonomous agent for searching GitHub repositories without indexing. Delegates to specialized sub-agents for faster, more thorough results. Supports fast mode (Haiku) and deep mode (Opus with 1M context).
./scripts/tracer.sh run <query> [repos_csv] [context] [mode] # Create Tracer job
./scripts/tracer.sh status <job_id> # Get job status/result
./scripts/tracer.sh stream <job_id> # Stream real-time updates (SSE)
./scripts/tracer.sh list [status] [limit] # List jobs
./scripts/tracer.sh delete <job_id> # Delete job
Environment variables: MODEL (claude-haiku-4-5-20251001|claude-opus-4-6|claude-opus-4-6-1m), TRACER_MODE (fast|slow)
Example workflow:
# 1. Start a search
./scripts/tracer.sh run "How does streaming work in generateText?" vercel/ai "Focus on core implementation" slow
# Returns: {"job_id": "abc123", "session_id": "def456", "status": "queued"}
# 2. Stream progress
./scripts/tracer.sh stream abc123
# 3. Get final result
./scripts/tracer.sh status abc123
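Steps 2–3 can be automated with a polling loop. This is a sketch: the JSON `status` field comes from the run response shown above, but the terminal status names (`completed`/`failed`/`cancelled`) are assumptions, not confirmed values:

```shell
# Hypothetical polling wrapper around tracer.sh status. Terminal status
# names below are assumed, not confirmed by the API.
is_terminal() {
  case "$1" in
    completed|failed|cancelled) return 0 ;;
    *)                          return 1 ;;
  esac
}

poll_tracer() {
  local job_id="$1" status
  while :; do
    status="$(./scripts/tracer.sh status "$job_id" | jq -r '.status')"
    if is_terminal "$status"; then break; fi
    sleep 5
  done
  printf '%s\n' "$status"
}
```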
Use Tracer when:
./scripts/slack.sh install # Generate Slack OAuth URL
./scripts/slack.sh callback <code> [redirect_uri] # Exchange OAuth code for tokens
./scripts/slack.sh register-token <xoxb-token> [name] # Register external bot token (BYOT)
./scripts/slack.sh list # List Slack installations
./scripts/slack.sh get <installation_id> # Get installation details
./scripts/slack.sh delete <installation_id> # Disconnect workspace
./scripts/slack.sh channels <installation_id> # List available channels
./scripts/slack.sh configure-channels <inst_id> [mode] # Configure channels to index
./scripts/slack.sh grep <installation_id> <pattern> [channel] # BM25 search indexed messages
./scripts/slack.sh index <installation_id> # Trigger full re-index
./scripts/slack.sh messages <installation_id> [channel] [limit] # Read recent messages (live)
./scripts/slack.sh status <installation_id> # Get indexing status
configure-channels env: INCLUDE_CHANNELS (csv of channel IDs), EXCLUDE_CHANNELS (csv)
install env: REDIRECT_URI, SCOPES (csv)
Workflow:
1. slack.sh install → get OAuth URL → user authorizes → slack.sh callback <code>, or slack.sh register-token xoxb-your-token "My Workspace"
2. slack.sh channels <id> → see available channels
3. slack.sh configure-channels <id> selected with INCLUDE_CHANNELS=C01,C02
4. slack.sh index <id> → trigger indexing
5. slack.sh grep <id> "search term" → search indexed messages, or SLACK_WORKSPACES=<id> ./scripts/search.sh query "question"

./scripts/google-drive.sh install [redirect_uri]               # Generate Google OAuth URL
./scripts/google-drive.sh callback <code> [redirect_uri] # Exchange OAuth code
./scripts/google-drive.sh list # List Drive installations
./scripts/google-drive.sh get <installation_id> # Get installation details
./scripts/google-drive.sh delete <installation_id> # Disconnect Drive
./scripts/google-drive.sh browse <installation_id> [folder_id] # Browse files/folders
./scripts/google-drive.sh selection <installation_id> # Get selected items
./scripts/google-drive.sh update-selection <id> <item_ids_csv> # Set selected items
./scripts/google-drive.sh index <id> [file_ids] [folder_ids] # Trigger indexing
./scripts/google-drive.sh status <installation_id> # Get index/sync status
./scripts/google-drive.sh sync <installation_id> [scope_ids_csv] # Trigger sync
install env: REDIRECT_URI, SCOPES (csv)
index env: FILE_IDS, FOLDER_IDS, DISPLAY_NAME
sync env: FORCE_FULL=true, SCOPE_IDS
./scripts/github.sh glob <owner/repo> <pattern> [ref] # Find files matching glob
./scripts/github.sh read <owner/repo> <path> [ref] [start] [end] # Read file with line range
./scripts/github.sh search <owner/repo> <query> [per_page] [page] # Code search (GitHub API)
./scripts/github.sh tree <owner/repo> [ref] [path] # Get file tree
Rate limited to 10 req/min by GitHub for code search. For indexed repo operations use repos.sh. For autonomous research use tracer.sh.
./scripts/papers.sh index <arxiv_url_or_id> # Index paper
./scripts/papers.sh list # List indexed papers
Supports: 2312.00752, https://arxiv.org/abs/2312.00752, PDF URLs, old format (hep-th/9901001), with version (2312.00752v1). Env: ADD_GLOBAL, DISPLAY_NAME
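The accepted URL forms above can be reduced to the bare identifier with a one-liner. A sketch; `arxiv_id` is a hypothetical helper:

```shell
# Hypothetical helper: strip an arXiv abs/pdf URL down to the bare
# identifier form that papers.sh also accepts. Old-style IDs pass through.
arxiv_id() {
  printf '%s\n' "$1" | sed -E 's#^https?://arxiv\.org/(abs|pdf)/##; s#\.pdf$##'
}
```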
./scripts/datasets.sh index <dataset> [config] # Index dataset
./scripts/datasets.sh list # List indexed datasets
Supports: squad, dair-ai/emotion, https://huggingface.co/datasets/squad. Env: ADD_GLOBAL
./scripts/packages.sh grep <registry> <package> <pattern> [ver] # Grep package code
./scripts/packages.sh hybrid <registry> <package> <query> [ver] # Semantic search
./scripts/packages.sh read <reg> <pkg> <sha256> <start> <end> # Read file lines
Registry: npm | py_pi | crates_io | golang_proxy | ruby_gems
Grep env: LANGUAGE, CONTEXT_BEFORE, CONTEXT_AFTER, OUTPUT_MODE, HEAD_LIMIT, FILE_SHA256
Hybrid env: PATTERN (regex pre-filter), LANGUAGE, FILE_SHA256
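The registry argument must use the exact names listed above. A hypothetical convenience that normalizes common spellings (the accepted input aliases are my invention; only the output names are from the list):

```shell
# Hypothetical helper: map common registry spellings to the exact names
# packages.sh expects (npm | py_pi | crates_io | golang_proxy | ruby_gems).
normalize_registry() {
  case "$1" in
    npm)              echo npm ;;
    pypi|py_pi)       echo py_pi ;;
    crates|crates_io) echo crates_io ;;
    go|golang_proxy)  echo golang_proxy ;;
    gems|ruby_gems)   echo ruby_gems ;;
    *)                return 1 ;;
  esac
}
```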
./scripts/categories.sh list [limit] [offset] # List categories
./scripts/categories.sh create <name> [color] [order] # Create category
./scripts/categories.sh update <cat_id> [name] [color] [order] # Update category
./scripts/categories.sh delete <cat_id> # Delete category
./scripts/categories.sh assign <source_id> <cat_id|null> # Assign/remove category
./scripts/contexts.sh save <title> <summary> <content> <agent> # Save context
./scripts/contexts.sh list [limit] [offset] # List contexts
./scripts/contexts.sh search <query> [limit] # Text search
./scripts/contexts.sh semantic-search <query> [limit] # Vector search
./scripts/contexts.sh get <context_id> # Get by ID
./scripts/contexts.sh update <id> [title] [summary] [content] # Update context
./scripts/contexts.sh delete <context_id> # Delete context
Save env: TAGS (csv), MEMORY_TYPE (scratchpad|episodic|fact|procedural), TTL_SECONDS, ORGANIZATION_ID, METADATA_JSON, NIA_REFERENCES_JSON, EDITED_FILES_JSON, LINEAGE_JSON
List env: TAGS, AGENT_SOURCE, MEMORY_TYPE
./scripts/deps.sh analyze <manifest_file> # Analyze dependencies
./scripts/deps.sh subscribe <manifest_file> [max_new] # Subscribe to dep docs
./scripts/deps.sh upload <manifest_file> [max_new] # Upload manifest (multipart)
Supports: package.json, requirements.txt, pyproject.toml, Cargo.toml, go.mod, Gemfile. Env: INCLUDE_DEV
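The supported-manifest list can be turned into a guard before calling deps.sh. A sketch; `is_supported_manifest` is a hypothetical helper:

```shell
# Hypothetical guard: check a file is one of the manifest types deps.sh
# supports before uploading it.
is_supported_manifest() {
  case "$(basename "$1")" in
    package.json|requirements.txt|pyproject.toml|Cargo.toml|go.mod|Gemfile)
      return 0 ;;
    *)
      return 1 ;;
  esac
}

# Usage sketch:
#   is_supported_manifest ./package.json && ./scripts/deps.sh analyze ./package.json
```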
(a wrapper around /sources)

./scripts/folders.sh create /path/to/folder [display_name]     # Create from local dir
./scripts/folders.sh create-db <database_file> [display_name] # Create from DB file
./scripts/folders.sh list [limit] [offset] # List folders
./scripts/folders.sh get <folder_id> # Get details
./scripts/folders.sh delete <folder_id> # Delete folder
./scripts/folders.sh rename <folder_id> <new_name> # Rename folder
./scripts/folders.sh tree <folder_id> # Get file tree
./scripts/folders.sh ls <folder_id> # Shallow tree view
./scripts/folders.sh read <folder_id> <path> # Read file
./scripts/folders.sh grep <folder_id> <pattern> [path_prefix] # Grep files
./scripts/folders.sh classify <folder_id> [categories_csv] # AI classification
./scripts/folders.sh classification <folder_id> # Get classification
./scripts/folders.sh sync <folder_id> /path/to/folder # Re-sync from local
./scripts/folders.sh assign-category <folder_id> <cat_id|null> # Assign/remove category
Env: STATUS, QUERY, CATEGORY_ID, MAX_DEPTH, INCLUDE_UNCATEGORIZED
./scripts/advisor.sh "query" file1.py [file2.ts ...] # Get code advice
Analyzes your code against indexed docs. Env: REPOS (csv), DOCS (csv), OUTPUT_FORMAT (explanation|checklist|diff|structured)
./scripts/usage.sh # Get usage summary
Base URL: https://apigcp.trynia.ai/v2

| Type | Index Command | Identifier Examples |
|---|---|---|
| Repository | repos.sh index | owner/repo, microsoft/vscode |
| Documentation | sources.sh index | https://docs.example.com |
| Research Paper | papers.sh index | 2312.00752, arXiv URL |
| HuggingFace Dataset | datasets.sh index | squad, owner/dataset |
| Local Folder | folders.sh create | UUID, display name (private, user-scoped) |
| Google Drive | google-drive.sh install + index | installation ID, source ID |
| Slack | slack.sh register-token / OAuth | installation ID |
For search.sh query:
- repositories — search GitHub repositories only (auto-detected when only repos passed)
- sources — search data sources only (auto-detected when only docs passed)
- unified — search both (default when both passed)

Pass sources via:

- repositories arg: comma-separated "owner/repo,owner2/repo2"
- data_sources arg: comma-separated "display-name,uuid,https://url"
- LOCAL_FOLDERS env: comma-separated "folder-uuid,My Notes"
- SLACK_WORKSPACES env: comma-separated installation IDs
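The comma-separated arguments can be built from individual names with a small join helper. A sketch; `join_csv` is a hypothetical helper:

```shell
# Hypothetical helper: join arguments into the comma-separated lists that
# search.sh query expects for repositories and data_sources.
join_csv() {
  local IFS=','
  printf '%s\n' "$*"   # "$*" joins args with the first character of IFS
}

# Usage sketch:
#   ./scripts/search.sh query "How does tool calling work?" \
#     "$(join_csv vercel/ai owner2/repo2)"
```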