NOTE:
This document is maintained at:
In this project, we adapt, deploy, and benchmark an AI service using Google's Cloud Functions [9], a function-as-a-service (FaaS) platform, and compare its performance to the benchmarks conducted in [1] for the same service hosted on cloud virtual machines, IoT devices, and locally hosted containers. In recent work, we used the Generalized AI Service (GAS) Generator to autogenerate and deploy RESTful AI services using the Cloudmesh and Cloudmesh-openapi utilities [1] [2] [3]. The GAS generator gives AI domain experts a simple interface for sharing AI services while automating infrastructure deployment and service hosting through the easy-to-use command-line interfaces provided by Cloudmesh and Cloudmesh-openapi. The example service, EigenfacesSVM [10], is a facial recognition example taken from Scikit-learn and modified to be an AI service [2]. This project compares benchmark results of the service deployed in a FaaS model to the serverful paradigms tested in the original work and explains the development and deployment differences between the GAS generator and the FaaS platform.
In [1] we adapted the EigenfacesSVM model from Scikit-learn into a set of four Python functions that download image data from a remote repository, train the AI model, upload an image for prediction, and predict the label for the uploaded image. The GAS generator uses Cloudmesh-openapi to automatically translate and deploy these Python functions as a RESTful service. The service is developed once and deployed to any target platform that Cloudmesh-openapi supports. For example, we demonstrated a multi-cloud service in which we deployed and benchmarked the AI service on three clouds simultaneously using Cloudmesh and Cloudmesh-openapi. We benchmarked the AI functions on cloud virtual machines from AWS, Google, and Azure, as well as on Raspberry Pi platforms, a MacBook, and a Docker container running on that MacBook. It is important to note that these services were deployed in a serverful manner, where the hosting platform continuously runs the service. In contrast, this project develops the same service in a serverless manner, where each invocation of the service is potentially carried out by a different instance.
Cloud functions are part of the serverless computing model, in which cloud providers offer managed, autoscaling execution environments for customers to deploy their code on [6]. By providing managed infrastructure, cloud providers reduce the demands on developers to deploy and maintain infrastructure. They also provide fine-grained billing, where customers are charged only for the execution of the function [4]. This contrasts with traditional VM pricing, where customers are charged for each hour the machine is running regardless of whether the hosted service is being used. Providers accomplish this by standing up and deploying customer code into a lightweight container on demand. Each deployed container is called an instance, and the cloud provider can scale the number of instances running based on the observed demand. Because cloud function instances are ephemeral, cloud functions are best suited for stateless and idempotent operations. If state needs to be saved or shared between instances, then the instances have to interface through a storage solution such as cloud object storage [8]. Additionally, cloud functions are not directly addressable, so a client cannot communicate with a specific instance.
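To make the programming model concrete, the sketch below shows the shape of a Python HTTP cloud function. It is a minimal, hypothetical example (the function name, bucket name, and object path are illustrative, not taken from our deployment): the handler receives a Flask request object and, because the instance may disappear after the call, persists anything worth keeping to Cloud Storage.

```python
# main.py -- a minimal, hypothetical HTTP cloud function.
# Deploy with, e.g.:
#   gcloud functions deploy echo_http --runtime python38 --trigger-http
from google.cloud import storage

BUCKET = "example-state-bucket"  # illustrative bucket name


def echo_http(request):
    """HTTP entry point; `request` is a flask.Request.

    Instances are ephemeral and not addressable, so any state that
    must survive this invocation is written to object storage.
    """
    name = request.args.get("name", "world")
    blob = storage.Client().bucket(BUCKET).blob("last_caller.txt")
    blob.upload_from_string(name)  # state shared via Cloud Storage
    return f"hello {name}\n"
```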
In this section we discuss the two deployment models used for the EigenfacesSVM AI service. The first is a more traditional serverful deployment that uses Cloudmesh-openapi to generate and host the AI service on a target platform. The second is a function-as-a-service deployment, which is the development focus of this project.
In Figure 1 we show an AI service consisting of four functions. These functions can be generated and hosted on any platform that Cloudmesh-openapi supports. In this example, we show Cloudmesh-openapi running on a traditional cloud virtual machine. The remote client interacts with the four functions that are exposed as RESTful services. All application state is stored on the local platform, and each function invocation is handled by the same server.
Figure 1: A client running an AI service workflow, generated and hosted by Cloudmesh OpenAPI, on a cloud provider virtual machine. Requests for each function invocation are made using standard HTTP request methods including function arguments [1].
In Figure 2 we show the same AI service running on a FaaS platform. The cloud provider implements each function as a container. These functions are ephemeral and not directly addressable, so they require an external storage platform to share state. The remote client interacts with the four functions using RESTful services. When a remote client requests a function, the cloud provider starts a container to host the function instance and initializes the execution environment before running the function. In some cases recently used instances can be reused if they have not yet been shut down by the autoscaling algorithm. The cloud provider automatically creates new function instances to meet demand. Unlike in a serverful hosting model, a remote client will interact with several different containers during the AI service workflow.
Figure 2: A client running an AI service workflow, hosted as a FaaS.
In this section we discuss the advantages and disadvantages of developing and deploying the AI service with both the GAS generator and FaaS.
Because cloud functions are stateless, external storage solutions, such as cloud object storage, are required to store and share state across functions. Because each cloud provider offers its own flavor of FaaS and storage solutions, choosing a FaaS model for a stateful application limits the portability of the service code to other platforms. In the EigenfacesSVM example, the stateful data includes the training data, the trained model, and the images uploaded for label prediction. Each of these objects is stored in Google Cloud Storage so the dependent functions can download them when invoked. To port this code to another platform, the developer would need to learn and reimplement that platform's specific storage API, or pay the higher cost (monetary and network latency) of continuing to use storage services from a separate external provider.
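As an illustration of this pattern (the bucket and object names below are hypothetical; our actual functions follow the same idea), the train function can serialize the fitted model with joblib and push it to a bucket, and the predict function can pull it back down:

```python
# Sharing the trained model between stateless functions via
# Cloud Storage; bucket and object names are illustrative.
import joblib
from google.cloud import storage

BUCKET = "eigenfaces-state"  # hypothetical bucket name


def save_model(model):
    # /tmp is the writable scratch space inside a cloud function
    joblib.dump(model, "/tmp/model.joblib")
    blob = storage.Client().bucket(BUCKET).blob("model.joblib")
    blob.upload_from_filename("/tmp/model.joblib")


def load_model():
    blob = storage.Client().bucket(BUCKET).blob("model.joblib")
    blob.download_to_filename("/tmp/model.joblib")
    return joblib.load("/tmp/model.joblib")
```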
In contrast to cloud functions, the GAS generator supports a more traditional serverful model where state is stored on the local OS file system or in a locally deployed database. With the GAS generator, service code can be written once and deployed on any platform that supports Cloudmesh-openapi and the service's dependencies. It takes a similar amount of time to develop an AI service using a FaaS model as it does to use the GAS generator to develop and deploy the same AI service to a wide range of supported platforms. This gives the developer the flexibility to migrate their service to an appropriate platform as needed, whereas FaaS, without extra effort, ties the use case to one particular cloud provider.
FaaS platforms also come with limited resources compared to the wide array of platforms supported by Cloudmesh-openapi. Google Cloud Functions currently limits developers to 2048 MB of memory, a 2.4 GHz equivalent processor, and a maximum of 540 seconds of function runtime [4]. While this was suitable for our example, it will severely limit the ability of FaaS to host more expansive AI models. Google has advertised an increase to 4096 MB of memory with a 4.8 GHz equivalent processor [4], but at the time of writing we were unable to successfully deploy a function to those target resources.
The GAS generator, on the other hand, provides access to a wide range of server (Windows, macOS, Linux), IoT (Raspbian OS), and container (Docker, Kubernetes) platforms, allowing AI developers to target the platform best suited to their needs.
Developing an AI service using cloud functions comes with prerequisite knowledge, including specific REST frameworks (like Flask's request objects), specific storage APIs (like google.cloud.storage), and, if a GUI is desired, HTML and other presentation languages.
In contrast, GAS significantly reduces these specific knowledge requirements. As previously discussed, it supports serverful deployment methods that do not require external storage services. It uses OpenAPI to automatically generate a self-documenting API and a web application presentation of the service, hosted by a Flask web server. The developer simply provides the Python function code, and the GAS generator turns it into a web app. This significantly increases the ability of AI domain experts to share their work with minimal effort. See Figure 9 in the Appendix for an example of the auto-generated web GUI. In a FaaS deployment, the developer would have to build their own; for this project we did not implement a GUI.
Developing and debugging a cloud function can be difficult because the function has to go through a time-consuming deployment process before it can be accessed and its logs checked for errors and output. Google Cloud Functions does provide instructions for setting up a local development environment, but this setup is more complicated than the one provided by Cloudmesh-openapi [5]. In contrast, GAS-generated services can be developed locally on the same platform they will be hosted on.
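For reference, Google's Functions Framework, described in [5], can serve a function locally before deployment; a minimal sketch (the function name is illustrative):

```python
# main.py -- run locally before deploying:
#   pip install functions-framework
#   functions-framework --target hello_http --port 8080
# then exercise it with: curl localhost:8080
import functions_framework


@functions_framework.http
def hello_http(request):
    return "hello from a locally served function\n"
```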
A main advantage of FaaS is the pricing scheme. The AI service is charged only for the runtime of the function and the long-term storage of any data. This gives domain experts a cost-efficient way to share their service, particularly if it is infrequently used. GAS-generated services provide pricing flexibility by targeting multiple platforms such as cloud virtual machines and low-cost IoT devices.
FaaS can autoscale based on observed usage, whereas GAS-generated services require the developer to scale the service with further infrastructure deployments.
Because cloud functions are hosted on servers managed by the cloud provider, the developer does not need to concern themselves with deploying and running infrastructure. GAS-generated services leave it to the developer to ensure the platform is managed and secured.
Because FaaS frameworks are developed and managed by commercial organizations, their code has the potential upside of being more stable and reliable for long-term use.
We benchmark three functions of the EigenfacesSVM service deployed using FaaS and compare them to the benchmarks from [1]. We measure three function runtimes:
- Train measures the runtime to train the EigenfacesSVM model and store it in cloud storage for future use
- Upload measures the runtime to upload an image and store it in cloud storage for future use
- Predict measures the runtime to download an image from cloud storage, load the AI model, and run a prediction on the image
We measure these functions from two different perspectives:
- Client This is the function runtime as measured from the remote client
- Server This is the function runtime as measured on the server, within the function itself
We measure runtimes using the cloudmesh.common.Benchmark utility. For the client measurement, we time the call in our Python test program. For the server measurement, we run the benchmark locally within the function on the server and return its results in the HTTP response. We expect the client runtime to be slower than the server runtime, as it includes both the network round-trip time and the time it takes to prepare an instance for function execution.
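A sketch of the server-side measurement pattern is shown below, assuming cloudmesh.common's StopWatch timer (which the Benchmark utility builds on); `do_prediction` is a hypothetical stand-in for the function body:

```python
# Time the function body on the server and return the benchmark
# table in the HTTP response; a sketch, not our exact harness.
import io
from contextlib import redirect_stdout

from cloudmesh.common.StopWatch import StopWatch


def timed_http(request):
    StopWatch.start("predict")
    result = do_prediction()  # hypothetical work function
    StopWatch.stop("predict")

    buf = io.StringIO()
    with redirect_stdout(buf):
        StopWatch.benchmark()  # prints the timer and system-info tables
    return f"{result}\n{buf.getvalue()}"
```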
Because cloud functions are ephemeral, we conduct two tests: one in which the majority of instances are cold-started, and a second where warm-start instances are already running. We constructed this test by first deploying a new function, ensuring there was only one instance running, and then issuing 30 requests in parallel, so the remaining 29 requests incur a cold start. Immediately after the first test completes, we run an additional 30 requests to try to capture warm-start instances (see the sketch after this list). Thus, cloud function instances are captured under two conditions:
- Cold-start A maximum of 1 instance is running before 30 parallel requests
- Warm-start This test of 30 parallel requests runs immediately upon the completion of the cold-start test
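A sketch of the request driver follows; it assumes the Python requests library, and the URL is a placeholder (our actual measurements are produced by test.py, shown in the Appendix workflow):

```python
# Fire 30 parallel requests (mostly cold starts on a fresh function),
# then 30 more immediately afterward (warm starts); URL is a placeholder.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://REGION-PROJECT.cloudfunctions.net/eigenfaces_predict_http"


def timed_call(i):
    start = time.time()
    requests.get(URL, params={"id": i})
    return time.time() - start


with ThreadPoolExecutor(max_workers=30) as pool:
    cold = list(pool.map(timed_call, range(30)))
with ThreadPoolExecutor(max_workers=30) as pool:
    warm = list(pool.map(timed_call, range(30)))

print(f"cold mean: {sum(cold) / len(cold):.2f} s")
print(f"warm mean: {sum(warm) / len(warm):.2f} s")
```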
From the perspective of the remote client, we expect the runtime of the cold-start functions to be significantly longer because the cloud provider must first prepare a container and initialize the function environment before it can be run. We expect warm-start function invocations to be significantly faster. From the server perspective, we expect the cold-start and warm-start function times to be similar, as the timer is not running during instance setup.
Finally, we measure two separate cloud function sizes to see if an increase in resources improves performance. Google Cloud Functions offers a fixed set of resource configurations [4]. We determined we had a minimum memory requirement of 1 GB, so we were only able to test the 1 GB and 2 GB variations. 4 GB variations are advertised, but we were unable to deploy to that target configuration in the us-east1 region at the time of writing. Thus there are two resource variants:
- 1gb Provides 1024 MB of memory and a 1.4 GHz processor
- 2gb Provides 2048 MB of memory and a 2.4 GHz processor
We expect functions to run faster on the variant with greater resources. Interestingly, these resources are similar in quantity to those of the Raspberry Pis from [1], and we are curious to see how the performance compares.
In Figure 3 we show the runtime of the train function under the various cloud function conditions. The bars show the average of 30 trials, and the error bars show the standard deviation of the 30 trials. As expected, cold-start functions are significantly slower than warm-start functions. However, the long runtime of this function amortizes the cold-start cost better than short-running functions do. Surprisingly, we do not see a distinguishable improvement for functions allocated more resources. We had hoped to deploy additional resource configurations to study this further, but these two configurations were the only ones with sufficient resources at the time of the study.
Figure 3: Train function runtime for cloud function with various conditions.
In Figure 4 we show the runtime of the upload function under the various cloud function conditions. The bars show the average of 30 trials, and the error bars show the standard deviation of the 30 trials. As expected, cold-start functions are significantly slower than warm-start functions. From the client perspective, the cold-start and warm-start difference is especially large, as instance startup comprises a significant portion of the overall function runtime. Surprisingly, there is a large difference between the client warm-start runtime and the server runtimes. This implies either that there is additional delay in handling requests besides environment setup, or that our experiment did not achieve a perfect warm-instance hit rate. Unlike in Figure 3, in this experiment we do see an identifiable decrease in runtime with increased resources.
Figure 4: Upload function runtime for cloud function with various conditions.
In Figure 5 we show the runtime of the predict function under the various cloud function conditions. The bars show the average of 30 trials, and the error bars show the standard deviation of the 30 trials. This figure further confirms the observations made in the discussion of Figure 4.
Figure 5: Predict function runtime for cloud function with various conditions.
In Figure 6 we show the runtime of the train function compared to the results from [1]. Machine specifications from [1] are shown in Table 2. The bars show the average of the trials, and the error bars show the standard deviation of the trials. We show only the server-observed runtimes, as client-side measurements were not taken in [1]. As predicted, performance was in the range of the two Raspberry Pi models, and traditional virtual machines significantly outperform the FaaS functions. Considering we used the highest working resource configuration available, it is surprising that Cloud Functions perform so poorly. Higher resource limits will be required from Google Cloud Functions before larger AI services could consider it a viable deployment target.
Figure 6: Server-side train function runtime for cloud function compared with other platforms.
In Figure 7 we show the runtime of the upload function compared to the results from [1]. The bars show the average of the trials, and the error bars show the standard deviation of the trials. In [1] the cloud VMs were the only remotely deployed services, so from a client perspective we can only compare the FaaS results to the cloud VMs. In [1] we identified the network round-trip time as the dominant component of the function runtime; here, FaaS function startup time dominates. We observe that the FaaS functions perform significantly worse despite similar network round-trip times, as all clouds and FaaS functions were deployed to east coast data centers and accessed from the same remote network. This graph shows that warm-start functions are not a significant enough improvement for AI services that demand low-latency responses.
Figure 7: Client-side upload function runtime for cloud function compared with other platforms.
In Figure 8 we show the runtime of the predict function compared to the results from [1]. The bars show the average of the trials, and the error bars show the standard deviation of the trials. This figure further confirms the observations made in the discussion of Figure 7.
Figure 8: Client-side predict function runtime for cloud function compared with other platforms.
Table 1: Complete test measurements.
size | party | type | test | mean | min | max | std |
---|---|---|---|---|---|---|---|
1gb | client | cold | predict | 7.27 | 2.14 | 9.12 | 1.44 |
1gb | client | warm | predict | 3.92 | 0.64 | 6.16 | 1.6 |
1gb | server | cold | predict | 0.7 | 0.52 | 0.92 | 0.1 |
1gb | server | warm | predict | 0.57 | 0.34 | 1.46 | 0.19 |
2gb | client | cold | predict | 7.08 | 1.18 | 8.09 | 1.31 |
2gb | client | warm | predict | 3.64 | 0.48 | 5.46 | 1.75 |
2gb | server | cold | predict | 0.63 | 0.55 | 0.75 | 0.05 |
2gb | server | warm | predict | 0.55 | 0.27 | 0.7 | 0.12 |
aws | client | | predict | 0.4 | 0.26 | 0.8 | 0.18 |
azure | client | | predict | 0.36 | 0.24 | 0.6 | 0.13 |
google | client | | predict | 0.36 | 0.27 | 0.82 | 0.16 |
1gb | client | cold | train | 129.38 | 112.51 | 178.07 | 16 |
1gb | client | warm | train | 123.23 | 94.06 | 183.9 | 15.37 |
1gb | server | cold | train | 123.93 | 107.72 | 171.5 | 16.22 |
1gb | server | warm | train | 119.23 | 93.67 | 179.99 | 15.06 |
2gb | client | cold | train | 131.19 | 113.92 | 171.67 | 12.56 |
2gb | client | warm | train | 118.33 | 61.43 | 138.82 | 16.65 |
2gb | server | cold | train | 125.74 | 110.26 | 164.44 | 11.85 |
2gb | server | warm | train | 114.8 | 61.22 | 135.2 | 15.83 |
aws | server | | train | 35.72 | 34.91 | 46.5 | 1.73 |
azure | server | | train | 40.28 | 35.3 | 47.5 | 3.32 |
docker | server | | train | 54.72 | 54.72 | 54.72 | 0 |
google | server | | train | 42.04 | 41.52 | 45.93 | 0.71 |
mac book | server | | train | 33.82 | 33.82 | 33.82 | 0 |
pi 3b+ | server | | train | 222.61 | 208.56 | 233.48 | 8.4 |
pi 4 | server | | train | 88.59 | 87.83 | 89.35 | 0.32 |
1gb | client | cold | upload | 5.97 | 1.42 | 7.67 | 1.24 |
1gb | client | warm | upload | 4.18 | 0.34 | 7.05 | 2.1 |
1gb | server | cold | upload | 0.2 | 0.14 | 0.42 | 0.06 |
1gb | server | warm | upload | 0.17 | 0.09 | 0.31 | 0.05 |
2gb | client | cold | upload | 4.96 | 0.9 | 5.97 | 0.85 |
2gb | client | warm | upload | 2.93 | 0.31 | 4.97 | 1.82 |
2gb | server | cold | upload | 0.17 | 0.13 | 0.23 | 0.03 |
2gb | server | warm | upload | 0.13 | 0.08 | 0.16 | 0.03 |
aws | client | | upload | 0.43 | 0.16 | 1.13 | 0.21 |
azure | client | | upload | 0.32 | 0.15 | 0.5 | 0.15 |
google | client | | upload | 0.31 | 0.18 | 0.73 | 0.18 |
This work focuses on generating benchmark results to compare to [1]; it does not yet implement a fully generalized EigenfacesSVM service. Some features of the functions remain to be completed for a more general service that does more than the Scikit-learn example. As it stands, the download function operates on one specific data set and the predict function on one specific image, while the upload function can upload arbitrary images. Extending the code into a full service will require additional argument passing and processing for the predict and download functions. Most of the logic to finish these features is present, but a complete implementation is not the focus of this work. These limits do not detract from the benchmark validity.
Our warm-start experiment is designed so that recently used containers are available, and while our results show this was the case, we do not explicitly measure what percentage of requests hit warm-start instances. Using global instance state, one can set a flag denoting whether an instance has been previously used; measuring what percentage of requests find this flag would enable better warm-start measurements.
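A sketch of such a flag is shown below; it relies on module scope being initialized once per instance, and is something one could add, not something we implemented:

```python
# Module scope runs once per instance, so a global flag can
# distinguish a cold start from a reused (warm) instance.
WARM = False


def flagged_http(request):
    global WARM
    was_warm = WARM  # False only on this instance's first invocation
    WARM = True
    return "warm\n" if was_warm else "cold\n"
```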
A full cost analysis is not presented to identify the true cost efficiency that the FaaS model may afford. We found this is not trivial to measure, as cost accrues from both storage usage and function invocations, which are priced and billed separately [4] [7]. Storage pricing further separates data-at-rest charges from network egress charges. For a true cost analysis, a robust set of use cases covering the amount of data, the length of data storage, the number of function invocations, and the regional distribution of services would need to be created and compared to a similar serverful deployment. This is outside the scope of this work.
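To show the shape such an analysis would take, a toy cost model appears below. Every rate is a placeholder chosen only to illustrate the structure of the calculation, not a quoted price (see [4] and [7] for actual pricing), and it deliberately ignores the CPU-time and egress charges that real billing adds:

```python
# Toy FaaS cost model; all rates are placeholders, not Google's pricing.
PER_MILLION_INVOCATIONS = 0.40  # placeholder $ per 1M calls
PER_GB_SECOND = 0.0000025       # placeholder $ per GB-second of memory
PER_GB_MONTH_STORAGE = 0.02     # placeholder $ per GB-month at rest


def monthly_cost(calls, mean_runtime_s, memory_gb, stored_gb):
    invocations = calls / 1_000_000 * PER_MILLION_INVOCATIONS
    compute = calls * mean_runtime_s * memory_gb * PER_GB_SECOND
    storage = stored_gb * PER_GB_MONTH_STORAGE
    return invocations + compute + storage


# e.g., 10,000 predict calls per month at ~0.6 s on the 2 GB variant,
# with ~0.3 GB of model and image data at rest:
print(f"${monthly_cost(10_000, 0.6, 2.0, 0.3):.4f} per month")
```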
In this project we deploy and benchmark an AI service using the Google Cloud Functions function-as-a-service platform. We study this with the intent to identify whether FaaS is a viable and easy-to-use model for AI domain experts to develop and share AI services. We demonstrate that FaaS has the benefits of per-invocation billing, autoscaling, and managed infrastructure, but that it is limited by performance, response latency, vendor-specific development, less accessible local development and testing, and platform inflexibility, and that it requires unnecessary prerequisite knowledge that should be automated away. We compare this to our previous work with the Generalized AI Service (GAS) Generator and show that the GAS generator overcomes many of these limitations.
We would like to thank Gregor von Laszewski, Richard Otten, Reilly Markowitz, Sunny Gandhi, Adam Chai, Caleb Wilson, Geoffrey C. Fox, and Wo L. Chang for the AI service generation and benchmarking work that this project is based on.
[1] "Using GAS for Speedy Generation of Hybrid Multi-Cloud AutoGenerated AI Services." Gregor von Laszewski, Anthony Orlowski, Richard Otten, Reilly Markowitz, ,Sunny Gandhi, Adam Chai, Caleb Wilson, Feoffry C. Fox, and Wo L. Chang.https://raw.githubusercontent.com/laszewski/laszewski.github.io/master/papers/vonLaszewski-openapi.pdf
[2] [cloudmesh-openapi] Gregor von Laszewski. 2020. Cloudmesh OpenAPI Repository for automatically generated REST services from Python functions. Web Page. https://github.com/cloudmesh/cloudmesh-openapi
[3][cloudmesh] Gregor von Laszewski. 2020. Cloudmesh Repositories. Github. https://github.com/cloudmesh
[4] "Pricing | Cloud Functions Documentation | Google Cloud." Google Cloud, 23 Nov. 2020, https://cloud.google.com/functions/pricing.
[5] "Local Development | Cloud Functions Documentation | Google Cloud." Google Cloud, 1 Dec. 2020, https://cloud.google.com/functions/docs/running/overview.
[6] "Cloud Functions Execution Environment | Cloud Functions Documentation." Google Cloud, 8 Dec. 2020, https://cloud.google.com/functions/docs/concepts/exec.
[7] "Cloud Storage pricing | Google Cloud." Google Cloud, 25 Nov. 2020, https://cloud.google.com/storage/pricing#price-tables.
[8] "Tips & Tricks | Cloud Functions Documentation | Google Cloud." Google Cloud, 11 Dec. 2020, https://cloud.google.com/functions/docs/bestpractices/tips.
[9] "How-to Guides | Cloud Functions Documentation | Google Cloud." 30 Nov. 2020, https://cloud.google.com/functions/docs/how-to.
[10] "Faces recognition example using eigenfaces and SVMs — scikit-learn 0.23.2 documentation." 11 Dec. 2020, https://scikit-learn.org/stable/auto_examples/applications/plot_face_recognition.html.
Prerequisite: gcloud command-line tool
Note: here we specify memory, region, and timeout limits.
cd ~/PycharmProjects/ef-faas/service
gcloud functions deploy eigenfaces_download_data_http --set-env-vars USER=benchmark --runtime python38 --trigger-http --allow-unauthenticated --memory=1024MB --timeout=540s --region=us-east1
gcloud functions deploy eigenfaces_train_http --set-env-vars USER=benchmark --runtime python38 --trigger-http --allow-unauthenticated --memory=1024MB --timeout=540s --region=us-east1
gcloud functions deploy eigenfaces_upload_http --set-env-vars USER=benchmark --runtime python38 --trigger-http --allow-unauthenticated --memory=1024MB --timeout=540s --region=us-east1
gcloud functions deploy eigenfaces_predict_http --set-env-vars USER=benchmark --runtime python38 --trigger-http --allow-unauthenticated --memory=1024MB --timeout=540s --region=us-east1
Here we tell the AI service to download remote image data and store it in cloud storage. An ACK and a benchmark are returned.
curl https://us-east1-anthony-orlowski.cloudfunctions.net/eigenfaces_download_data_http?id=1
Data downloaded as lfw-funneled.tgz
+---------------------+------------------------------------------------------------------+
| Attribute | Value |
|---------------------+------------------------------------------------------------------|
| BUG_REPORT_URL | "https://bugs.launchpad.net/ubuntu/" |
| DISTRIB_CODENAME | bionic |
| DISTRIB_DESCRIPTION | "Ubuntu 18.04.5 LTS" |
| DISTRIB_ID | Ubuntu |
| DISTRIB_RELEASE | 18.04 |
| HOME_URL | "https://www.ubuntu.com/" |
| ID | ubuntu |
| ID_LIKE | debian |
| NAME | "Ubuntu" |
| PRETTY_NAME | "Ubuntu 18.04.5 LTS" |
| PRIVACY_POLICY_URL | "https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" |
| SUPPORT_URL | "https://help.ubuntu.com/" |
| UBUNTU_CODENAME | bionic |
| VERSION | "18.04.5 LTS (Bionic Beaver)" |
| VERSION_CODENAME | bionic |
| VERSION_ID | "18.04" |
| cpu_count | 2 |
| mem.active | 436.4 MiB |
| mem.available | 1.3 GiB |
| mem.free | 1.3 GiB |
| mem.inactive | 18.5 MiB |
| mem.percent | 36.3 % |
| mem.total | 2.0 GiB |
| mem.used | 439.5 MiB |
| platform.version | #1 SMP Sun Jan 10 15:06:54 PST 2016 |
| python | 3.8.5 (default, Sep 14 2020, 07:13:57) |
| | [GCC 7.5.0] |
| python.pip | 20.1.1 |
| python.version | 3.8.5 |
| sys.platform | linux |
| uname.machine | x86_64 |
| uname.node | localhost |
| uname.processor | x86_64 |
| uname.release | 4.4.0 |
| uname.system | Linux |
| uname.version | #1 SMP Sun Jan 10 15:06:54 PST 2016 |
| user | benchmark |
+---------------------+------------------------------------------------------------------+
+------------------------------------+----------+--------+--------+---------------------+-------+-----------+-----------+-------+-------------------------------------+
| Name | Status | Time | Sum | Start | tag | Node | User | OS | Version |
|------------------------------------+----------+--------+--------+---------------------+-------+-----------+-----------+-------+-------------------------------------|
| main/eigenfaces_download_data_http | ok | 97.143 | 97.143 | 2020-12-11 23:00:53 | | localhost | benchmark | Linux | #1 SMP Sun Jan 10 15:06:54 PST 2016 |
+------------------------------------+----------+--------+--------+---------------------+-------+-----------+-----------+-------+-------------------------------------+
# csv,timer,status,time,sum,start,tag,uname.node,user,uname.system,platform.version
# csv,main/eigenfaces_download_data_http,ok,97.143,97.143,2020-12-11 23:00:53,,localhost,benchmark,Linux,#1 SMP Sun Jan 10 15:06:54 PST 2016
Here we tell the AI service to fetch the training data, train a model, and store the model in cloud storage. Model training results and a benchmark are returned.
curl https://us-east1-anthony-orlowski.cloudfunctions.net/eigenfaces_train_http?id=1
Total dataset size:
n_samples: 1288
n_features: 1850
n_classes: 7
Extracting the top 150 eigenfaces from 966 faces
done in 0.591s
Projecting the input data on the eigenfaces orthonormal basis
done in 0.084s
Fitting the classifier to the training set
done in 61.893s
Best estimator found by grid search:
SVC(C=1000.0, class_weight='balanced', gamma=0.005)
Predicting people's names on the test set
done in 0.098s
precision recall f1-score support
Ariel Sharon 0.67 0.46 0.55 13
Colin Powell 0.81 0.87 0.84 60
Donald Rumsfeld 0.94 0.63 0.76 27
George W Bush 0.81 0.98 0.89 146
Gerhard Schroeder 0.95 0.80 0.87 25
Hugo Chavez 1.00 0.47 0.64 15
Tony Blair 1.00 0.75 0.86 36
accuracy 0.84 322
macro avg 0.88 0.71 0.77 322
weighted avg 0.86 0.84 0.84 322
[[ 6 2 0 5 0 0 0]
[ 1 52 0 7 0 0 0]
[ 1 2 17 7 0 0 0]
[ 0 3 0 143 0 0 0]
[ 0 1 0 4 20 0 0]
[ 0 3 0 4 1 7 0]
[ 1 1 1 6 0 0 27]]
+---------------------+------------------------------------------------------------------+
| Attribute | Value |
|---------------------+------------------------------------------------------------------|
| BUG_REPORT_URL | "https://bugs.launchpad.net/ubuntu/" |
| DISTRIB_CODENAME | bionic |
| DISTRIB_DESCRIPTION | "Ubuntu 18.04.5 LTS" |
| DISTRIB_ID | Ubuntu |
| DISTRIB_RELEASE | 18.04 |
| HOME_URL | "https://www.ubuntu.com/" |
| ID | ubuntu |
| ID_LIKE | debian |
| NAME | "Ubuntu" |
| PRETTY_NAME | "Ubuntu 18.04.5 LTS" |
| PRIVACY_POLICY_URL | "https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" |
| SUPPORT_URL | "https://help.ubuntu.com/" |
| UBUNTU_CODENAME | bionic |
| VERSION | "18.04.5 LTS (Bionic Beaver)" |
| VERSION_CODENAME | bionic |
| VERSION_ID | "18.04" |
| cpu_count | 2 |
| mem.active | 736.4 MiB |
| mem.available | 1.3 GiB |
| mem.free | 1.3 GiB |
| mem.inactive | 21.0 MiB |
| mem.percent | 37.0 % |
| mem.total | 2.0 GiB |
| mem.used | 179.6 MiB |
| platform.version | #1 SMP Sun Jan 10 15:06:54 PST 2016 |
| python | 3.8.5 (default, Sep 14 2020, 07:13:57) |
| | [GCC 7.5.0] |
| python.pip | 20.1.1 |
| python.version | 3.8.5 |
| sys.platform | linux |
| uname.machine | x86_64 |
| uname.node | localhost |
| uname.processor | x86_64 |
| uname.release | 4.4.0 |
| uname.system | Linux |
| uname.version | #1 SMP Sun Jan 10 15:06:54 PST 2016 |
| user | benchmark |
+---------------------+------------------------------------------------------------------+
+----------------------------+----------+---------+---------+---------------------+-------+-----------+-----------+-------+-------------------------------------+
| Name | Status | Time | Sum | Start | tag | Node | User | OS | Version |
|----------------------------+----------+---------+---------+---------------------+-------+-----------+-----------+-------+-------------------------------------|
| main/eigenfaces_train_http | ok | 153.425 | 153.425 | 2020-12-11 23:04:58 | | localhost | benchmark | Linux | #1 SMP Sun Jan 10 15:06:54 PST 2016 |
+----------------------------+----------+---------+---------+---------------------+-------+-----------+-----------+-------+-------------------------------------+
# csv,timer,status,time,sum,start,tag,uname.node,user,uname.system,platform.version
# csv,main/eigenfaces_train_http,ok,153.425,153.425,2020-12-11 23:04:58,,localhost,benchmark,Linux,#1 SMP Sun Jan 10 15:06:54 PST 2016
Here we upload an image, example_image.jpg, from our local directory. It is stored in cloud storage. An ACK and a benchmark are returned.
curl -F example_image.jpg=@example_image.jpg https://us-east1-anthony-orlowski.cloudfunctions.net/eigenfaces_upload_http?id=1
File 1example_image.jpg uploaded.
+---------------------+------------------------------------------------------------------+
| Attribute | Value |
|---------------------+------------------------------------------------------------------|
| BUG_REPORT_URL | "https://bugs.launchpad.net/ubuntu/" |
| DISTRIB_CODENAME | bionic |
| DISTRIB_DESCRIPTION | "Ubuntu 18.04.5 LTS" |
| DISTRIB_ID | Ubuntu |
| DISTRIB_RELEASE | 18.04 |
| HOME_URL | "https://www.ubuntu.com/" |
| ID | ubuntu |
| ID_LIKE | debian |
| NAME | "Ubuntu" |
| PRETTY_NAME | "Ubuntu 18.04.5 LTS" |
| PRIVACY_POLICY_URL | "https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" |
| SUPPORT_URL | "https://help.ubuntu.com/" |
| UBUNTU_CODENAME | bionic |
| VERSION | "18.04.5 LTS (Bionic Beaver)" |
| VERSION_CODENAME | bionic |
| VERSION_ID | "18.04" |
| cpu_count | 2 |
| mem.active | 150.3 MiB |
| mem.available | 1.8 GiB |
| mem.free | 1.8 GiB |
| mem.inactive | 18.5 MiB |
| mem.percent | 8.2 % |
| mem.total | 2.0 GiB |
| mem.used | 131.8 MiB |
| platform.version | #1 SMP Sun Jan 10 15:06:54 PST 2016 |
| python | 3.8.5 (default, Sep 14 2020, 07:13:57) |
| | [GCC 7.5.0] |
| python.pip | 20.1.1 |
| python.version | 3.8.5 |
| sys.platform | linux |
| uname.machine | x86_64 |
| uname.node | localhost |
| uname.processor | x86_64 |
| uname.release | 4.4.0 |
| uname.system | Linux |
| uname.version | #1 SMP Sun Jan 10 15:06:54 PST 2016 |
| user | benchmark |
+---------------------+------------------------------------------------------------------+
+-----------------------------+----------+--------+-------+---------------------+-------+-----------+-----------+-------+-------------------------------------+
| Name | Status | Time | Sum | Start | tag | Node | User | OS | Version |
|-----------------------------+----------+--------+-------+---------------------+-------+-----------+-----------+-------+-------------------------------------|
| main/eigenfaces_upload_http | ok | 0.3 | 0.3 | 2020-12-11 23:06:16 | | localhost | benchmark | Linux | #1 SMP Sun Jan 10 15:06:54 PST 2016 |
+-----------------------------+----------+--------+-------+---------------------+-------+-----------+-----------+-------+-------------------------------------+
# csv,timer,status,time,sum,start,tag,uname.node,user,uname.system,platform.version
# csv,main/eigenfaces_upload_http,ok,0.3,0.3,2020-12-11 23:06:16,,localhost,benchmark,Linux,#1 SMP Sun Jan 10 15:06:54 PST 2016
Here we run the predict function (the image is hardcoded at the moment, but argument passing is easy to implement). The predicted label for the image ('George W Bush') and a benchmark are returned.
curl https://us-east1-anthony-orlowski.cloudfunctions.net/eigenfaces_predict_http?id=1
['George W Bush']
+---------------------+------------------------------------------------------------------+
| Attribute | Value |
|---------------------+------------------------------------------------------------------|
| BUG_REPORT_URL | "https://bugs.launchpad.net/ubuntu/" |
| DISTRIB_CODENAME | bionic |
| DISTRIB_DESCRIPTION | "Ubuntu 18.04.5 LTS" |
| DISTRIB_ID | Ubuntu |
| DISTRIB_RELEASE | 18.04 |
| HOME_URL | "https://www.ubuntu.com/" |
| ID | ubuntu |
| ID_LIKE | debian |
| NAME | "Ubuntu" |
| PRETTY_NAME | "Ubuntu 18.04.5 LTS" |
| PRIVACY_POLICY_URL | "https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" |
| SUPPORT_URL | "https://help.ubuntu.com/" |
| UBUNTU_CODENAME | bionic |
| VERSION | "18.04.5 LTS (Bionic Beaver)" |
| VERSION_CODENAME | bionic |
| VERSION_ID | "18.04" |
| cpu_count | 2 |
| mem.active | 158.8 MiB |
| mem.available | 1.8 GiB |
| mem.free | 1.8 GiB |
| mem.inactive | 19.7 MiB |
| mem.percent | 8.7 % |
| mem.total | 2.0 GiB |
| mem.used | 139.0 MiB |
| platform.version | #1 SMP Sun Jan 10 15:06:54 PST 2016 |
| python | 3.8.5 (default, Sep 14 2020, 07:13:57) |
| | [GCC 7.5.0] |
| python.pip | 20.1.1 |
| python.version | 3.8.5 |
| sys.platform | linux |
| uname.machine | x86_64 |
| uname.node | localhost |
| uname.processor | x86_64 |
| uname.release | 4.4.0 |
| uname.system | Linux |
| uname.version | #1 SMP Sun Jan 10 15:06:54 PST 2016 |
| user | benchmark |
+---------------------+------------------------------------------------------------------+
+------------------------------+----------+--------+-------+---------------------+-------+-----------+-----------+-------+-------------------------------------+
| Name | Status | Time | Sum | Start | tag | Node | User | OS | Version |
|------------------------------+----------+--------+-------+---------------------+-------+-----------+-----------+-------+-------------------------------------|
| main/eigenfaces_predict_http | ok | 0.912 | 0.912 | 2020-12-11 23:06:50 | | localhost | benchmark | Linux | #1 SMP Sun Jan 10 15:06:54 PST 2016 |
+------------------------------+----------+--------+-------+---------------------+-------+-----------+-----------+-------+-------------------------------------+
# csv,timer,status,time,sum,start,tag,uname.node,user,uname.system,platform.version
# csv,main/eigenfaces_predict_http,ok,0.912,0.912,2020-12-11 23:06:50,,localhost,benchmark,Linux,#1 SMP Sun Jan 10 15:06:54 PST 2016
gcloud functions describe eigenfaces_download_data_http
gcloud functions delete eigenfaces_download_data_http
Benchmark output is stored in the scripts' output.
python test.py 1gb
python graph.py
Figure 9: A screenshot of the EigenfacesSVM auto-generated GUI [1].