Develop a web service (ML API) that predicts house prices based on the Ames housing dataset. The web service exposes one endpoint that takes numerical input (79 variables as JSON) and returns a prediction as output.
Try the service!
# The service may not be accessible at the moment since I shut down the AWS instances; stay tuned for updates!
$ curl 35.165.231.104:8000
$ curl http://35.165.231.104:8000/REST/api/v1.0/train
$ curl http://35.165.231.104:8000/REST/api/v1.0/model_list
$ curl -i -H "Content-Type: application/json" -X POST -d "$(python script/get_test_json.py)" http://35.165.231.104:8000/REST/api/v1.0/predict_with_input
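The last command above POSTs a JSON document of Ames features to the predict endpoint. A minimal Python sketch of the same request using only the standard library (the three feature names shown are real Ames columns but only a small subset of the full 79-variable payload; the values are made up):

```python
import json
import urllib.request

# A handful of example Ames features; the real payload carries all 79 variables.
payload = {"LotArea": 8450, "OverallQual": 7, "YearBuilt": 2003}

req = urllib.request.Request(
    "http://35.165.231.104:8000/REST/api/v1.0/predict_with_input",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would return the prediction JSON;
# not called here, since the instance may be offline.
```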
- Flask: as the ML API server
- Docker Hub: as the service's Docker repository
- AWS ECS: as the container service that runs the ML API via Docker
- AWS Elastic Load Balancer: automatically distributes incoming application traffic across multiple targets
- AWS S3: as storage for models, ML output, and logs

The goal is to develop a dockerized ML API via Flask and deploy the same API hundreds, even millions, of times on the cloud. The AWS Elastic Load Balancer (ELB) addresses this scalability: it dispatches heavy API request traffic across the workers running on ECS so the ML predictions are returned on time. AWS S3 serves as versioned storage for models and outputs. Logs can be sent to AWS CloudWatch for a service dashboard, and the API can also run on AWS Fargate for its serverless advantages (quick development, no EC2 management costs).

Local dev -> Local train -> Unit-test -> Docker build -> Travis (CI/CD) -> Deploy to Dockerhub -> Deploy to AWS ECS -> Online train -> API ready
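The versioned model storage mentioned above can be sketched as follows. This is only an illustration: the `model_v<N>.pkl` key naming scheme is an assumed convention, not necessarily how the repo (or S3 object versioning) tracks versions.

```python
import re

def latest_model_key(keys):
    """Pick the highest-versioned model key from a list of S3-style keys.

    Assumes a hypothetical 'model_v<N>.pkl' naming convention for
    illustration; keys without a version suffix sort last.
    """
    def version(key):
        m = re.search(r"_v(\d+)\.pkl$", key)
        return int(m.group(1)) if m else -1
    return max(keys, key=version)

keys = ["model/model_v1.pkl", "model/model_v3.pkl", "model/model_v2.pkl"]
print(latest_model_key(keys))  # → model/model_v3.pkl
```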
├── Dockerfile : Dockerfile to build the web service (ML API)
├── Predict : Main class for ML prediction
├── api : API runner (Flask web server)
├── data : Train and test data
├── log : Service log files
├── model : Storage for trained models
├── output : Storage for ML prediction output
├── requirements.txt : Python dependencies
├── script : Helper scripts (parse JSON, upload files, ...)
├── tests : Unit-test scripts
└── utils : Utility classes for file/S3 IO