Test Scenario & Use Case
Bias detection and mitigation in AI models.
Discover all actions of fairAITools. Create a scored dataset of loan applicants: the data includes a sensitive variable 'GENDER', the actual outcome 'LOAN_DEFAULT', and the model's predicted probability 'P_DEFAULT'.
DATA mycas.loan_applicants_scored;
   CALL STREAMINIT(123);
   ARRAY GENDERS[3] $ 8 ('Male', 'Female', 'Other');
   DO i = 1 TO 2000;
      GENDER = GENDERS[RAND('INTEGER', 1, 3)];
      INCOME = 40000 + RAND('UNIFORM') * 80000;
      CREDIT_SCORE = 500 + RAND('INTEGER', 1, 350);
      P_DEFAULT_BASE = 0.1 + (850 - CREDIT_SCORE)/350 * 0.5;
      IF GENDER = 'Female' THEN P_DEFAULT = P_DEFAULT_BASE * 1.15; /* Introduce slight bias */
      ELSE IF GENDER = 'Other' THEN P_DEFAULT = P_DEFAULT_BASE * 0.95;
      ELSE P_DEFAULT = P_DEFAULT_BASE;
      LOAN_DEFAULT = (RAND('UNIFORM') < P_DEFAULT);
      P_DEFAULT = MIN(0.99, MAX(0.01, P_DEFAULT + (RAND('UNIFORM')-0.5)*0.1)); /* Add noise to the prediction */
      P_NO_DEFAULT = 1 - P_DEFAULT;
      OUTPUT;
   END;
RUN;
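For reference, the bias-injection logic of the DATA step above can be mirrored in a short Python sketch (an illustration only, not part of the test; the `mean_p` helper is ours). The 1.15 multiplier shifts the predicted default probability upward for the 'Female' group relative to the 'Male' baseline:

```python
import random

random.seed(123)

GENDERS = ["Male", "Female", "Other"]
rows = []
for _ in range(2000):
    gender = random.choice(GENDERS)
    credit_score = 500 + random.randint(1, 350)
    p_base = 0.1 + (850 - credit_score) / 350 * 0.5
    # Inject the same group-dependent bias as the DATA step
    if gender == "Female":
        p_default = p_base * 1.15
    elif gender == "Other":
        p_default = p_base * 0.95
    else:
        p_default = p_base
    loan_default = int(random.random() < p_default)
    # Add noise and clamp, mirroring MIN(0.99, MAX(0.01, ...))
    p_default = min(0.99, max(0.01, p_default + (random.random() - 0.5) * 0.1))
    rows.append({"GENDER": gender, "LOAN_DEFAULT": loan_default, "P_DEFAULT": p_default})

def mean_p(group):
    """Average predicted default probability for one GENDER group."""
    vals = [r["P_DEFAULT"] for r in rows if r["GENDER"] == group]
    return sum(vals) / len(vals)
```

With 2000 rows, the average 'P_DEFAULT' for the 'Female' group ends up measurably higher than for the 'Male' group, which is exactly the disparity the bias assessment is meant to surface.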
/* Data is already created in CAS in the data_prep step */
PROC CAS;
   fairAITools.assessBias /
      table={name='loan_applicants_scored'},
      response='LOAN_DEFAULT',
      sensitiveVariable='GENDER',
      predictedVariables={'P_DEFAULT'},
      event='1',
      referenceLevel='Male',
      cutoff=0.4,
      scoredTable={name='LOAN_BIAS_RESULTS', replace=true};
RUN;
QUIT;
The action should execute successfully and produce several output tables. The 'BiasMetrics' table should contain metrics such as 'EqualOpportunity' and 'PredictiveParity' for the 'Female' and 'Other' groups, calculated against the 'Male' reference group. Given the bias introduced during data generation, the false positive rate for the 'Female' group is expected to differ from that of the 'Male' group, and this difference should be reflected in the metrics.
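The exact metric definitions and output columns follow SAS's fairAITools documentation; to make the expectations concrete, here is a hypothetical Python sketch (the `rates` helper and tiny sample are ours, not the action's) of how group-wise rates at the 0.4 cutoff would be compared against the 'Male' reference level:

```python
CUTOFF = 0.4

# Tiny hand-made sample: (GENDER, actual LOAN_DEFAULT, predicted P_DEFAULT)
scored = [
    ("Male",   1, 0.70), ("Male",   0, 0.20), ("Male",   1, 0.35), ("Male",   0, 0.45),
    ("Female", 1, 0.80), ("Female", 0, 0.50), ("Female", 1, 0.60), ("Female", 0, 0.30),
]

def rates(group):
    """Confusion-matrix rates for one group at the fixed cutoff."""
    tp = fp = fn = tn = 0
    for g, y, p in scored:
        if g != group:
            continue
        pred = int(p >= CUTOFF)
        if pred and y: tp += 1
        elif pred and not y: fp += 1
        elif not pred and y: fn += 1
        else: tn += 1
    tpr = tp / (tp + fn) if tp + fn else float("nan")  # equal opportunity compares TPRs
    fpr = fp / (fp + tn) if fp + tn else float("nan")  # false positive rate
    ppv = tp / (tp + fp) if tp + fp else float("nan")  # predictive parity compares PPVs
    return tpr, fpr, ppv

ref_tpr, ref_fpr, ref_ppv = rates("Male")
tpr, fpr, ppv = rates("Female")
# e.g. equal-opportunity gap = TPR(group) - TPR(reference)
print("Female TPR gap:", tpr - ref_tpr, "FPR gap:", fpr - ref_fpr, "PPV gap:", ppv - ref_ppv)
```

A nonzero gap on any of these rates for the 'Female' or 'Other' group, relative to 'Male', is the kind of signal the 'BiasMetrics' table is expected to report for the generated data.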