Founding AI Research Engineer / Scientist — Institutional World Models, LLMs, Graph ML, MLOps & AI Safety
Company: HYLMAN
Location: Location not specified (Remote)
Type: Full-time
Remote: Yes
Posted: 2026-05-08
About this role
HYLMAN is forming the founding technical team for a new frontier-AI lab initiative focused on **verified world models for governed institutions**.
We are not building another chatbot, RAG wrapper, workflow bot, or generic agent toolchain.
We are working on a new class of AI systems that can model how real organizations change when actions are taken. In enterprise, public-sector, healthcare, financial, industrial, and regulated environments, an action is not successful just because an API call returns 200 OK. It is successful only if the right institutional state changed, the right approvals existed, the right evidence was present, downstream obligations were closed, hidden side effects were acceptable, and policy constraints were satisfied.
The core research direction is **action-conditioned institutional world modeling**: given a typed state graph, policy/evidence context, observability limits, history, and a candidate action, the model predicts a verifiable state-diff contract before execution: expected changes, forbidden changes, hidden side effects, open obligations, missing proof, policy violations, delayed risk, and calibrated abstention.
This is a founding team opportunity. We are looking for exceptional people across several deep technical profiles.
What you may work on
You may contribute to one or more of the following:
Building **institutional state graphs** from enterprise events, workflow logs, documents, policies, tickets, CRM records, ITSM records, approvals, evidence, obligations, controls, and system snapshots.
Designing **receipt-backed transition datasets** that bind pre-state, actor, authority, action, evidence, post-state diff, policy result, observability mask, and delayed outcome.
Training and evaluating **action-conditioned transition models** that predict structured state changes, hidden side effects, policy violations, evidence gaps, unresolved obligations, delayed risk, and uncertainty.
Developing **graph/sequence/foundat...