What is AutoEnrol?
AutoEnrol is AI admissions software for universities. A language model reads every applicant file, including transcripts, degree certificates, English-test reports, and supporting documents, and the institution's own admissions rules, encoded from its handbook, produce a recommendation in seconds with a full rule-by-rule audit trail. Evaluators validate every recommendation, so AutoEnrol sits under the team, not above it.
Who is AutoEnrol for?
University admissions teams processing international applications, both undergraduate and postgraduate, across many countries and credential systems. It is built for institutions that want to compress the ~45-minute manual review per application to seconds without giving up explainability, reproducibility, or control of their own rule set.
How is AutoEnrol different from a CRM, an SIS, or generic "AI admissions" tools?
CRMs and SIS platforms manage the application workflow; generic AI admissions tools typically score applicants with an opaque model. AutoEnrol is neither. It is a decision engine that reads the documents, runs the institution's own rules deterministically, and produces a recommendation with a rule-by-rule audit trail. It complements your CRM and SIS rather than replacing them.
What kinds of applications and documents does AutoEnrol handle?
International undergraduate and postgraduate applications, with native coverage of 100+ countries and credential systems. Supported documents include academic transcripts, degree certificates, English-test reports (IELTS, TOEFL, PTE), GMAT/GRE scores, passports, and statements of purpose, across native scripts, scans, and photographed pages.
Is this another black-box AI making admissions decisions?
No. AutoEnrol produces recommendations, not decisions, and your team validates every one. Recommendations are generated by rules derived from your handbook, so the same inputs produce the same outputs every time. AI is used only where it is strongest and safest: reading documents, extracting fields, and drafting rationale. The rule logic itself is explicit, reproducible, and auditable.
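To make the determinism claim concrete, here is a minimal, hypothetical sketch of rule evaluation with an audit trail. The rule names, thresholds, and applicant fields are illustrative only, not AutoEnrol's actual rule format.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # True if the applicant satisfies the rule

@dataclass
class Recommendation:
    outcome: str
    audit_trail: list = field(default_factory=list)

def evaluate(applicant: dict, rules: list) -> Recommendation:
    # Pure function of its inputs: the same applicant and the same rule set
    # always produce the same outcome and the same audit trail.
    rec = Recommendation(outcome="recommend")
    for rule in rules:
        passed = rule.check(applicant)
        rec.audit_trail.append((rule.name, "pass" if passed else "fail"))
        if not passed:
            rec.outcome = "refer to evaluator"
    return rec

rules = [
    Rule("IELTS overall >= 6.5", lambda a: a.get("ielts_overall", 0) >= 6.5),
    Rule("GPA >= 3.0 (4.0 scale)", lambda a: a.get("gpa", 0) >= 3.0),
]

applicant = {"ielts_overall": 7.0, "gpa": 2.8}
rec = evaluate(applicant, rules)
```

Here the failed GPA rule downgrades the outcome to a referral, and the trail records every rule checked, pass or fail, so the rationale exists the moment the recommendation does.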
Will we have to change our admissions policy to use AutoEnrol?
No. Your handbook is the source of truth. We encode your thresholds, your prerequisites, your country equivalencies, and your exceptions, not a vendor scorecard. Your standards stay yours.
What happens if a regulator asks us to justify a specific recommendation?
Every recommendation carries a full rule-by-rule audit trail, an easy-to-understand rationale, and bounding-box traceability back to the source documents. The explanation is generated alongside the recommendation, not reconstructed afterwards.
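As an illustration of what bounding-box traceability can look like, here is a hypothetical shape for a traceable extracted field; the field names, filename, and coordinates are assumptions, not AutoEnrol's actual schema.

```python
# Every value the rules consume points back to where it was read
# in the source document: file, page, and region on the page.
extracted_field = {
    "name": "ielts_overall",
    "value": 7.0,
    "source": {
        "document": "ielts_report.pdf",  # illustrative filename
        "page": 1,
        "bbox": [112, 540, 198, 562],    # x0, y0, x1, y1 in page pixels
    },
}
```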
Does this replace our admissions evaluators?
No. The system proposes; your team validates. AutoEnrol handles the rule-bound baseline. Low-confidence cases, policy exceptions, and edge credentials are flagged for closer review, so experienced evaluators spend time where institutional judgement is needed most. It sits under the team, not above it.
How do you handle credentials from countries we rarely see?
AutoEnrol ships with native coverage of 100+ countries and credential systems, plus a validation path for anything the engine flags as low-confidence. The system is designed to admit what it does not know.
Many of our admissions rules aren’t written down anywhere. Is that a problem?
That’s the norm, not the exception. Most institutions have rules that live in individual evaluators’ heads. We work with your team to surface them and put them in one place, usually the first time your whole admissions rule set has ever been visible as a single artifact. Most teams describe the experience as relief.
Why not build this in-house? We have engineers.
Some institutions have tried. One large provider reached roughly 8,500 rules before the project was paused; another abandoned a two-year build. The rule surface is deeper than it looks. Country equivalencies, program-specific overrides, and edge-credential logic multiply faster than most roadmaps anticipate. AutoEnrol is the off-ramp when your engineers’ time would be better spent on the rest of your stack.
How long does implementation take?
Published entry criteria can be processed into working rules in days, not months. Pilots run in 2–4 weeks with no production integration required.
Where does our data live and who can see it?
Your data resides in your region (AU, EU, UK, or US) under ISO 27001-certified controls. Pilots run on fully de-identified data, and our Trust Center provides DPA and SOC-style documentation on request.
What if your rules engine disagrees with our evaluator?
Side-by-side benchmarking is a first-class feature, not an afterthought. The 150-application pilot is explicitly designed to surface every divergence so your team can decide whether to adjust the rule or override the case.