Hello, we're Instrumentl. We're a mission-driven startup helping the nonprofit sector drive impact, and we're well on our way to becoming the #1 most-loved grant discovery and management tool.
About us: Instrumentl is a hypergrowth YC-backed startup with over 5,000 nonprofit clients, from local homeless shelters to larger organizations like the San Diego Zoo and the University of Alaska. We are building the future of fundraising automation, helping nonprofits to discover, track, and manage grants efficiently through our SaaS platform.
Our charts are dramatically up and to the right: we're cash flow positive and doubling year-over-year, with customers who love us (NPS is 65+ and our Ellis PMF survey score is 60+). Join us on this rocket ship to Mars!
About the Role
As an AI Data Engineer at Instrumentl, you'll own the systems that discover, acquire, and transform unstructured content into clean, structured, queryable data. You'll build automated content discovery from the web and design LLM-powered extraction pipelines that convert grant documents, foundation profiles, and third-party data into canonical business objects, enabling our product teams to build intelligent features on a reliable data foundation. This is a data platform role: you'll own the extraction quality that populates our canonical data models and the pipeline reliability that keeps them current. You'll build evaluation harnesses, optimize for cost at scale, and ensure our AI-derived data is accurate enough to trust. You'll be part of the AI Engineering team, partnering closely with product engineers who consume your data products.
What you'll do
- Build content discovery pipelines: Automate discovery and acquisition of grant-related content from the web (foundation websites, RFPs, program announcements), turning the open web into structured, actionable data.
- Build LLM extraction pipelines: Implement production pipelines to transform unstructured text into canonical business objects, including document ingestion (PDFs, HTML, Word), OCR, table extraction, and layout-aware parsing. Partner with product engineers to evolve schemas as domain needs change.
- Own semantic chunking and embeddings: Design chunking strategies optimized for retrieval; select and manage embedding models; maintain vector indices that power downstream search and RAG features.
- Optimize for cost and latency: Profile token usage, implement caching and batching strategies, choose appropriate models for different tasks, and manage the cost/quality tradeoff at scale.
- Maintain data quality and serve downstream consumers: Implement validation, anomaly detection, and alerting for extraction drift. Expose clean data via APIs, materialized views, or event streams that product teams can rely on without understanding the extraction complexity. Integrate and normalize data from external providers, resolving entities, mapping to internal schemas, and ensuring "Ford Foundation" and "The Ford Foundation" resolve to the same canonical record.
What we're looking for
- Software engineering background: 5+ years of professional software engineering experience, including 2+ years working with modern LLMs (as an IC). Startup experience and comfort operating in fast, scrappy environments is a plus.
- Proven production impact: You've taken LLM/RAG systems from prototype to production, owned reliability/observability, and iterated post-launch based on evals and user feedback.
- LLM agentic systems: Experience building tool/function-calling workflows, planning/execution loops, and safe tool integrations (e.g., with LangChain/LangGraph, LlamaIndex, Semantic Kernel, or custom orchestration).
- RAG expertise: Strong grasp of document ingestion, chunking/windowing, embeddings, hybrid search (keyword + vector), re-ranking, and grounded citations. Experience with re-rankers/cross-encoders, hybrid retrieval tuning, or search/recommendation systems.
- Embeddings & vector stores: Hands-on with embedding model selection/versioning and vector DBs (e.g., pgvector, FAISS, Pinecone, Weaviate, Milvus, Qdrant). Document processing at scale (PDF parsing/OCR), structured extraction with JSON schemas, and schema-guided generation.
- Evaluation mindset: Comfort designing eval suites (RAG/QA, extraction, summarization) using automated and human-in-the-loop methods; familiarity with frameworks like Ragas, DeepEval, OpenAI Evals, or equivalent.
- Infrastructure & languages: Proficiency in Python (FastAPI, Celery) and TypeScript/Node; familiarity with Ruby on Rails (our core platform) or willingness to learn. Experience with AWS/GCP, Docker, CI/CD, and observability (logs/metrics/traces).
- Data chops: Comfortable with SQL, schema design, and building/maintaining data pipelines that power retrieval and evaluation.
- Collaborative approach: You thrive in a cross-functional environment and can translate researchy ideas into shippable, user-friendly features.
- Results-driven: Bias for action and ownership, with an eye for speed, quality, and simplicity.
Nice to have:
- Fine-tuning: Practical experience with SFT/LoRA or instruction-tuning, and good intuition for when fine-tuning vs. prompting vs. model choice is the right lever.
- Exposure to open-source LLMs (e.g., Llama) and providers (e.g., OpenAI, Anthropic, Google, Mistral).
- Familiarity with responsible AI, red-teaming, and domain-specific safety policies.
Why You'll Love Working Here:
- Join a mission-driven, product-led team that values curiosity, collaboration, and clear outcomes
- Work closely with leaders who believe in bold ideas, fast learning, and empowering people to do their best work
- Play a direct role in shaping the team and culture that will take Instrumentl to its next stage of growth
Compensation & Benefits
- For US-based candidates, our target salary band is $175,000 - $220,000 USD + equity. Salary decisions consider experience, location, and technical depth.
- 100% covered health, dental, and vision insurance for employees (50% for dependents)
- Generous PTO, including parental leave
- 401(k)
- Company laptop and home-office stipend
- Bi-Annual Company Retreats for in-person collaboration
Instrumentl is evolving rapidly. You'll always have new challenges and opportunities to grow here.