A Structural Summary calculator for the Rorschach test under the Exner Comprehensive System (CS). This service does not replace professional clinical judgment.

About

This web app was created to support Structural Summary calculation in the Exner (CS) system for the Rorschach test in a more stable and efficient way. It helps clinicians, trainees, and students organize the scoring elements they enter repeatedly, and review location, determinants, form quality, special scores, and major indices in a single workflow. Structural Summary calculation and reference-document search are available even without signing in.

The app also includes reference documents that can be consulted during coding and Structural Summary work. These materials organize rules, concepts, variables, and coding standards for the Rorschach test and the Exner (CS) system by topic, and they were created and curated jointly by the Seoul Institute of Clinical Psychology (SICP) and MOW. Users can search these documents directly whenever they need to verify a concept or rule.

Signed-in users can use AI features by registering their own OpenAI or Google API key. Here, BYOK means “Bring Your Own Key”: the user connects and uses their own API key instead of relying on a service-provided one. We chose this approach because, under the providers’ API policies, data submitted through the API is not used for model training, a protection that matters for sensitive psychological-test material, and the workshop material behind this project emphasized that point.

The service also makes its reference documents public for the same design reason. We wanted clinicians, trainees, and students to be able to see what standards the AI is relying on and to review its responses critically. In that sense, the AI is meant to function less like a hidden black box and more like a support tool whose underlying standards remain open to human review.

This service is jointly operated by SICP and MOW. SICP provides clinical guidance and overall direction, while MOW handles product implementation and technical operations. For service operation, bug reports, or privacy-related questions, please contact sicpseoul@gmail.com. This service and its AI features are support tools only. They do not replace independent professional clinical judgment or formal diagnosis, and final interpretation and responsibility always remain with the user.

Human-in-the-Loop: A Service Designed Around Human-Centered AI Principles

This service was designed not to let AI make judgments in place of professionals, but to help clinicians and trainees review and decide based on clearer standards. The content below summarizes the framework of five ethical principles that guided the design of the app’s AI features and operating model.

For more details on human-centered AI and HITL, watch the workshop video at the following link.

1. Autonomy & Informed Consent

Users should be told in clear language why AI is being used, how their data is handled, and what the tool can and cannot do. In sensitive psychological contexts, understandable notice matters as much as technical functionality.

Users should also remain free to refuse or stop AI use, and meaningful human-led alternatives should remain available. AI should expand options, not quietly take control away from the person using the service.

2. Beneficence & Non-Maleficence

AI may improve efficiency, but it can also be inaccurate or oversimplified. Its output therefore needs critical review, and higher-risk judgment must remain with a human professional rather than being handed off to automation.

Scientific validity and cultural appropriateness matter as well. A tool should be used not just because it is convenient, but because it contributes real benefit without introducing avoidable harm.

3. Confidentiality, Privacy, & Transparency

Psychological-test material is sensitive, so data handling should stay minimal, controlled, and well protected. One reason this app adopts BYOK is that API-based use is governed by provider policies under which submitted data is not used for model training.

Transparency also means that users should be able to inspect what the AI is relying on. That is why the service openly publishes the reference documents the AI uses instead of hiding its standards inside a black box.

4. Justice, Fairness, & Inclusiveness

AI systems should be reviewed for cultural bias, structural unfairness, or uneven performance across groups. In psychological settings, convenience is never enough if a tool might provide less fair support to some people than to others.

Fairness therefore has to be treated as an ongoing review practice. The service is designed to help users continue to question and examine outputs rather than accept automated suggestions uncritically.

5. Professional Integrity & Accountability

AI tools are appropriate only when the person using them has enough training to interpret the output responsibly. The key question is not simply whether AI is present, but whether its results are handled within sound professional judgment.

For that reason, this service is not designed to take over final judgment. Responsibility remains with the human user, who is expected to combine technical support with professional and ethical standards.