Lists all inference providers offered during NemoClaw onboarding. Use when explaining which providers are available, what the onboard wizard presents, or how inference routing works. Trigger keywords - nemoclaw inference options, nemoclaw onboarding providers, nemoclaw inference routing, switch nemoclaw inference model, change inference runtime, nemoclaw local inference, ollama nemoclaw, vllm nemoclaw, local model server, openai compatible endpoint.
Change the active inference model while the sandbox is running. No restart is required.
Switching happens through the OpenShell inference route. Pass the provider and model that match the upstream you want to target.
$ openshell inference set --provider nvidia-prod --model nvidia/nemotron-3-super-120b-a12b
$ openshell inference set --provider openai-api --model gpt-5.4
$ openshell inference set --provider anthropic-prod --model claude-sonnet-4-6
$ openshell inference set --provider gemini-api --model gemini-2.5-flash
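If you switch between these upstreams often, a small wrapper that maps a short alias to the full provider/model pair saves retyping the long model identifiers. A minimal sketch in Python — the alias names and the wrapper itself are illustrative, not part of NemoClaw or OpenShell; only the `openshell inference set` invocation mirrors the documented examples above:

```python
import shlex
import subprocess

# Illustrative alias table; the provider/model pairs mirror the
# documented `openshell inference set` examples above.
ALIASES = {
    "nemotron": ("nvidia-prod", "nvidia/nemotron-3-super-120b-a12b"),
    "gpt": ("openai-api", "gpt-5.4"),
    "sonnet": ("anthropic-prod", "claude-sonnet-4-6"),
    "flash": ("gemini-api", "gemini-2.5-flash"),
}

def switch_command(alias: str) -> list[str]:
    """Build the `openshell inference set` argv for a known alias."""
    provider, model = ALIASES[alias]
    return ["openshell", "inference", "set",
            "--provider", provider, "--model", model]

def switch(alias: str) -> None:
    """Run the switch in place; no sandbox restart is required."""
    subprocess.run(switch_command(alias), check=True)

if __name__ == "__main__":
    # Print the command instead of running it, for inspection.
    print(shlex.join(switch_command("sonnet")))
```

Using the argv-list form of `subprocess.run` avoids shell quoting issues with model names that contain `/`.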
npx skills add NVIDIA/skills --skill nemoclaw-user-configure-inference