Frequently asked
The questions procurement asks first.
Built for the concerns procurement, IT, and faculty raise before anyone signs anything.
Is this another black-box AI making admissions decisions?
No. AutoEnrol produces recommendations, not decisions - your team validates every one. Recommendations are generated by rules derived from your handbook, so the same inputs produce the same outputs every time. AI is used only where it is strongest and safest: reading documents, extracting fields, and drafting rationale. The rule logic itself is explicit, reproducible, and auditable.
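The determinism claim can be pictured as explicit rules applied by a pure function: the same inputs always produce the same outputs. This is a minimal sketch with invented rule names and fields, not AutoEnrol's actual engine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """A hypothetical entry rule: a named threshold on one applicant field."""
    name: str
    field: str
    minimum: float

def evaluate(rules: list[Rule], applicant: dict) -> list[tuple[str, bool]]:
    """Pure function: identical inputs always yield identical outcomes."""
    return [(r.name, applicant.get(r.field, 0) >= r.minimum) for r in rules]

# Illustrative handbook rules and applicant data.
rules = [Rule("GPA floor", "gpa", 5.0), Rule("IELTS overall", "ielts", 6.5)]
applicant = {"gpa": 5.8, "ielts": 6.0}
print(evaluate(rules, applicant))  # [('GPA floor', True), ('IELTS overall', False)]
```

Because the rules are data rather than model weights, each one can be inspected, versioned, and replayed, which is what makes the logic auditable.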
Will we have to change our admissions policy to use AutoEnrol?
No. Your handbook is the source of truth. We encode your thresholds, your prerequisites, your country equivalencies, your exceptions - not a vendor scorecard. Your standards stay yours.
What happens if a regulator asks us to justify a specific recommendation?
Every recommendation carries a full rule-by-rule audit trail, a human-readable rationale, and bounding-box traceability back to the source documents. The explanation is generated alongside the recommendation - not reconstructed afterwards.
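One way to read "generated alongside, not reconstructed afterwards": the recommendation and its explanation are emitted as a single record in one pass. Field names here are illustrative, not AutoEnrol's actual schema:

```python
import json

def recommend_with_audit(rule_results, doc_refs):
    """Build the recommendation and its audit trail together,
    so the explanation is never reconstructed after the fact."""
    passed = all(ok for _, ok in rule_results)
    return {
        "recommendation": "admit" if passed else "refer",
        "audit_trail": [
            {"rule": name, "outcome": "pass" if ok else "fail"}
            for name, ok in rule_results
        ],
        "source_regions": doc_refs,  # e.g. bounding boxes into the source documents
    }

record = recommend_with_audit(
    [("GPA floor", True), ("IELTS overall", False)],
    [{"doc": "transcript.pdf", "page": 1, "bbox": [40, 120, 310, 150]}],
)
print(json.dumps(record, indent=2))
```

A record like this is what a team could hand to a regulator: every rule outcome, plus pointers back to the exact document regions the fields were read from.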
Does this replace our admissions evaluators?
No. The system proposes; your team validates. AutoEnrol handles the rule-bound baseline - low-confidence cases, policy exceptions, and edge credentials are flagged for closer review - so experienced evaluators spend their time on the cases that actually need institutional judgement. It sits under the team, not above it.
How do you handle credentials from countries we rarely see?
Native coverage of 100+ countries and credential systems, plus a validation path for anything the engine flags as low-confidence. The system is designed to admit what it does not know.
Many of our admissions rules aren’t written down anywhere. Is that a problem?
That’s the norm, not the exception. Most institutions have rules that live in individual evaluators’ heads. We work with your team to surface them and put them in one place - usually the first time your whole admissions rule set has ever been visible as a single artifact. Most teams describe the experience as relief.
Why not build this in-house? We have engineers.
Some institutions have tried. One large provider reached roughly 8,500 rules before the project was paused; another abandoned a two-year build. The rule surface is deeper than it looks - country equivalencies, program-specific overrides, and edge-credential logic multiply faster than most roadmaps anticipate. AutoEnrol is the off-ramp when your engineers’ time would be better spent on the rest of your stack.
How long does implementation take?
From published entry criteria to working rules in days, not months. Pilots run in 2–4 weeks with no production integration required.
Where does our data live and who can see it?
Regional data residency (AU, EU, UK, US). ISO 27001-certified controls. Pilots run on fully de-identified data. DPA and SOC-style documentation is available through our Trust Center on request.
What if your rules engine disagrees with our evaluator?
Side-by-side benchmarking is a first-class feature, not an afterthought. The 150-application pilot is explicitly designed to surface every divergence so your team can decide whether to adjust the rule or override the case.
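The divergence review amounts to pairing the engine's decision with the evaluator's decision per application and keeping only the disagreements. A sketch with invented ids and labels, not AutoEnrol's actual tooling:

```python
def divergences(engine: dict, evaluator: dict) -> list[tuple]:
    """Pair decisions by application id; keep only the disagreements."""
    return [
        (app_id, engine[app_id], evaluator[app_id])
        for app_id in engine
        if app_id in evaluator and engine[app_id] != evaluator[app_id]
    ]

# Hypothetical pilot decisions from the engine and a human evaluator.
engine = {"A-001": "admit", "A-002": "refer", "A-003": "admit"}
evaluator = {"A-001": "admit", "A-002": "admit", "A-003": "admit"}
print(divergences(engine, evaluator))  # [('A-002', 'refer', 'admit')]
```

Each row in that list is a case for the team to resolve: either the rule is too strict and gets adjusted, or the evaluator's call stands as an override.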