Introduction
The Inference API is a FastAPI-based service that provides credit risk prediction using a deep learning model. The API predicts whether a customer’s credit risk is “good” or “bad” based on various customer attributes.

Base URL
API Information
- Name: Credit Score Prediction API
- Version: 1.0.0
- Description: API to predict the credit risk of a customer using a deep learning model
Authentication
Currently, the API does not require authentication. It is configured with CORS middleware to allow requests from any origin (*).
CORS Configuration
The API is configured with the following CORS settings:
- Allow Origins: * (all origins)
- Allow Credentials: true
- Allow Methods: * (all methods)
- Allow Headers: * (all headers)
Architecture
The API uses a singleton pattern for model inference, ensuring efficient resource usage:
- Model Loading: The deep learning model is loaded once at startup from model_weights_001.pth
- Preprocessor: A scikit-learn preprocessor (preprocessor.joblib) handles feature engineering
- Configuration: Model architecture is defined in model_config_001.yaml
- Inference Engine: A PyTorch-based neural network runs predictions on CPU
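The load-once behavior of the singleton can be sketched with a cached loader. This is a stdlib-only sketch: the real torch/joblib/yaml loading calls are indicated in the docstring, and the counter exists only to demonstrate that the loader runs a single time:

```python
from functools import lru_cache

LOAD_COUNT = {"n": 0}  # instrumentation: shows the loader runs only once


@lru_cache(maxsize=1)
def get_model():
    """Load model artifacts on first call; later calls reuse the same objects.

    In the real service this would do something like:
        weights = torch.load("model_weights_001.pth", map_location="cpu")
        preprocessor = joblib.load("preprocessor.joblib")
        config = yaml.safe_load(open("model_config_001.yaml"))
    """
    LOAD_COUNT["n"] += 1
    return {
        "weights": "model_weights_001.pth",
        "preprocessor": "preprocessor.joblib",
        "config": "model_config_001.yaml",
    }


# Both calls return the identical cached instance; the loader ran once.
a = get_model()
b = get_model()
assert a is b and LOAD_COUNT["n"] == 1
```

Because `lru_cache(maxsize=1)` memoizes the zero-argument loader, every request handler that calls `get_model()` shares one in-memory model instead of reloading weights per request.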
Interactive Documentation
FastAPI automatically generates interactive API documentation:
- Swagger UI: Available at /docs
- ReDoc: Available at /redoc
- Root Endpoint: Redirects to /docs
Client Examples
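A minimal Python client can call the service over HTTP using only the standard library. The endpoint path /predict, the host/port, and the payload field names below are assumptions; consult the interactive docs at /docs for the actual request schema:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed default uvicorn host/port


def predict(payload: dict) -> dict:
    """POST customer attributes to the prediction endpoint, return the JSON reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/predict",  # endpoint path is an assumption; see /docs
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    # Hypothetical customer attributes -- replace with the real schema fields.
    sample = {"age": 35, "credit_amount": 2500, "duration": 24}
    print(predict(sample))
```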
Error Handling
The API returns standard HTTP status codes:
- 200: Successful prediction
- 422: Validation error (invalid input data)
- 500: Internal server error during inference
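Client code can translate these status codes into Python-level outcomes. A sketch (the response body shapes are illustrative; FastAPI's 422 body is a JSON document describing which fields failed validation):

```python
import json


def interpret_response(status: int, body: bytes):
    """Map the API's documented status codes to a result or a raised error."""
    if status == 200:
        return json.loads(body)  # successful prediction
    if status == 422:
        # Validation error: the JSON body details the offending input fields.
        raise ValueError(f"Validation error: {json.loads(body)}")
    if status == 500:
        raise RuntimeError("Internal server error during inference")
    raise RuntimeError(f"Unexpected status code: {status}")
```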
Running the Server
Start the API server using uvicorn, e.g. uvicorn main:app --host 0.0.0.0 --port 8000 (the module name main is an assumption; substitute the module that defines the FastAPI app).

Next Steps
- Credit Score Prediction: Learn about the prediction endpoint
- API Schema: View complete request/response schemas
