Introduction to On-Premises Serverless Function as a Service with OpenFaaS and Go — Part 1
The Function as a Service (FaaS) paradigm allows developers to focus on code rather than on infrastructure. As a result, applications that can be deployed as FaaS, or that use elements of FaaS, can typically reach the market faster than those built with more traditional ways of representing and encapsulating callable functionality.
In Part 1 of this series, which aims to explore OpenFaaS functionality in depth, we will look at the basic concepts of FaaS and OpenFaaS, and stand up a first lab environment to perform a basic proof of concept of the OpenFaaS serverless platform.
In later articles we will look at using OpenFaaS to invoke custom Go functions, using Kafka to trigger OpenFaaS functions, and using templates to streamline our development practices. Further articles will then look in depth at scaling OpenFaaS and its supporting components and ecosystem.
What is FaaS?
FaaS can be considered a sub-concept of serverless, where serverless is a mode of working in which:
- The developer experience (DevX) centres on writing small units of code, with little or no need to orchestrate infrastructure directly.
- Either functions or microservices are developed.
- Scaling and parallelism are provided just-in-time by the platform software.
- The developer's code is portable between different FaaS infrastructures.
This concentration on functions reduces the software-management burden on the developer, focuses effort on smaller units of code that map more directly to business value, and encourages designs that are event driven or fronted by an endpoint such as REST/HTTP. Arguably it also improves comprehensibility, both because of these factors and because function interfaces all share the same uniform shape.
Serverless offerings are typically either SaaS products or platforms that can be deployed on-premises, usually to a Kubernetes cluster. The current CNCF vogue is to follow the best practices of what has become known as Serverless 2.0.
What is Serverless 2.0?
The name Serverless 2.0 naturally raises the question of what Serverless 1.0 was.
Early cloud-vendor FaaS offerings such as AWS Lambda, now retrospectively termed Serverless 1.0, typically faced a number of problems. The most salient concerned the developer experience: a lack of standardization between vendors, barriers to portability, limited flexibility, and so on. A good overview of these problems can be found in [1].
To help address these concerns, the CNCF defined a simple contract for Serverless 2.0 that promotes portability. It stipulates that:
- An OCI-compatible container image is used
- An HTTP server on port 8080 is used
- Configuration is supplied via environment variables
What is OpenFaaS?
OpenFaaS promotes the principle of running your code anywhere with the same unified experience. It supplies a function store and a templating system that let you quickly stand up serverless FaaS on-premises, in the cloud, or even on your development laptop. A connector-sdk is also available for creating custom Go connectors.
Other solutions exist that are either installable on-premises or cloud native; notable players include Google Cloud Functions and Knative.
Why Go?
Go has become associated with the moniker of a cloud-native programming language. It was originally designed at Google in 2007 to tackle the problems of developing, maintaining and evolving large distributed software projects. Arguably other languages such as Rust offer similar qualities; however, I selected Go for this effort as I am currently using it extensively for Kubernetes private-cloud projects.
Getting Started
In this article we will create a local lab cluster as a sandbox, since the goal is merely to introduce the reader to a simple working implementation of OpenFaaS; of course, the OpenFaaS deployment documentation can be followed to install to various environments, including Google's GKE and Microsoft's AKS cloud-based Kubernetes offerings.
For our introductory case we will assume the reader has the capability to run a local kind cluster.
First, create a named cluster with kind:
kind create cluster --name robtest
Once the cluster is running, for expedience we can use Arkade, a simple marketplace CLI for Kubernetes tooling, to install OpenFaaS.
So, next install Arkade:
curl -sLS https://get.arkade.dev | sudo sh
Within moments you should be able to inspect Arkade.
Running the following command should give you an idea of the breadth of tooling that can be leveraged from this simple-to-use tool.
arkade install --help
Partial output is illustrated below.
With Arkade it is now possible to install a lab copy of OpenFaaS. Use the following command.
arkade install openfaas
In short order the following output will be generated:
Running:
kubectl get deploy -n openfaas
will now show the following, indicating that the OpenFaaS deployments are running.
We now need to install the OpenFaaS client application, faas-cli.
This can be accomplished with the following command:
curl -SLsf https://cli.openfaas.com | sudo sh
Once faas-cli is installed we can confirm the rollout of OpenFaaS.
kubectl rollout status -n openfaas deploy/gateway
We can now also forward the port to the OpenFaaS gateway which we can use to deploy, manage and call our functions.
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
We can now obtain the admin password for the deployment:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
We can now log in:
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
The same client login information can be used to login to the OpenFaaS web UI.
Once logged in you should see something like this.
The OpenFaaS web UI can be used to deploy and invoke operations on functions, though for Part 1 of this series we will kick the tyres of our OpenFaaS deployment using the faas-cli. In later articles we will progress to developing more advanced functions from OpenFaaS templates, and move on to a more complex event-based function in Go.
To test our first function we can use the faas-cli to list the functions that have been preconfigured in the function store.
faas-cli store list -v
The output of preconfigured functions listed from the store is shown below.
For our test we will use the tesseract OCR function to see if we can successfully output the characters in an image passed via curl.
For a simple test we will see if the function can read some text from the abstract of the paper Understanding Real-World Concurrency Bugs in Go [2].
snippet_abstract.png
First we use faas-cli to deploy our serverless function to our OpenFaaS lab environment from the function store.
We run the following command:
faas-cli store deploy "Tesseract OCR"
If we now run list with faas-cli we should see that our OCR function is installed and ready.
faas-cli list
Note that we can see the Kubernetes deployment for our OCR function with the following command.
kubectl describe deployments ocr -n openfaas-fn
Which produces the following output:
As a first step we base64-encode our image.
base64 snippet_abstract.png > snippet_abstract.b64
We now have a file called snippet_abstract.b64 that contains the base64 representation of our image of text.
Let’s call the OCR function on our private lab serverless FaaS platform!
For our first call we will use the faas-cli thus:
cat snippet_abstract.b64 | faas-cli invoke ocr
This produces the following output.
Excellent. That's looking good. Now let's try calling the function directly via curl.
curl -sL -d @snippet_abstract.b64 http://127.0.0.1:8080/function/ocr -v
Here’s the output
Our OCR function is looking really cool and is certainly useful.
For good measure, we will demonstrate calling our function asynchronously using OpenFaaS functionality. Later articles will detail this mechanism further, though you can follow along and experiment now.
Open a new shell and start netcat listening on port 8889.
nc -l 8889
Next we reformulate the curl command to pass an X-Callback-Url header pointing at our netcat listener, and to use the async-function branch of the OpenFaaS function URLs.
curl -sL -d @snippet_abstract.b64 -i http://127.0.0.1:8080/async-function/ocr -H "X-Callback-Url: http://192.168.0.157:8889/" -v
Here we call our OCR function asynchronously, so we pass X-Callback-Url pointing at the IP address of our lab host machine and port 8889, where netcat is listening.
We obtain the following response.
Note the Status Code of 202. If we now refer to our other shell executing the netcat listener we should see the output of the callback.
There we have it: we successfully called our FaaS OCR function via curl and received a valid, correct and well-formed response via the callback.
Conclusion
In this article we have successfully demonstrated the configuration of a lab OpenFaaS environment on a local Kubernetes cluster (in this case using kind). We have installed the OpenFaaS client faas-cli, performed some basic introspection of the deployed Kubernetes components, and introduced the web UI.
We have successfully deployed an OCR function from the OpenFaaS function store, then used both faas-cli invoke and the plain HTTP interface to synchronously perform OCR on our image of text.
We then furthered our example by introducing the capability to asynchronously call our function by obtaining the result via a callback pattern.
In Part 2 of this series we will start to implement our OpenFaaS functions using the Go programming language. See you there.
[1] https://files.gotocon.com/uploads/slides/conference_13/728/original/welcome_serverless_20.pdf
[2] T. Tu, X. Liu, L. Song, Y. Zhang, Understanding Real-World Concurrency Bugs in Go, ASPLOS 2019.