rob siwicki
Dec 18, 2022

Introduction to On-Premises Serverless Function as a Service with OpenFaaS and Go — Part 3

In this third part of our OpenFaaS and Go series we will demonstrate how OpenFaaS and serverless FaaS can be used to deliver an integration between two hypothetical cloud native systems (a cloud product provider API and our own FinOps product) with tremendous speed to market.

Situation

Our organization is extremely interested in FinOps and in what FinOps and cloud compute as resources could mean both for optimizing business value and for the personal effectiveness and motivation of developers (for example, developers who write cost-efficient code may prove more valuable than those pursuing other kinds of optimisation). We have software products that assist with these heavy analytical and predictive tasks, but in our case we need to integrate them with a collaborating source system. That source system is a microservices design, deployed to Kubernetes, for ordering digital infrastructure and data products, such as compute time on GPUs. We need this integration to understand how to bill our customers accurately and which products our customers are entitled to use.

Our Assumptions

With this scenario in mind we will use a FaaS approach to provide a custom integration between our FinOps and Entitlements systems and our hypothetical on-premises digital product ordering service, using some familiar concepts.

  1. We assume we have a series of APIs that generate an exhaust of domain events from our private cloud as customers perform billable operations, such as "instance of product X launched"; e.g. customer Bob has launched a new instance of a managed Machine Learning (ML) service and it is consuming GPU time from t1 to t1+n.
  2. We are interested in showback, chargeback and billing our customers for using our products.
  3. We are interested in entitlements too: understanding whether our customers are using licenses or otherwise consuming some other non-fungible, finite digital resource.
  4. Our Product APIs (collaborating source systems) follow an event sourcing paradigm where domain events are stored in Kafka topics as notional eventstores. These events will be represented as JSON.
  5. We want multiple triggered FinOps event functions to operate directly from the singleton eventstore (in this case we are using a Kafka topic as an eventstore).
  6. The combination of billing and entitlement analysis allows us to optimize costs for our customers while maintaining our obligations to third-party vendors in matters such as licensing, and enables the optimisation of pricing strategies based on cloud provider data and ML forecasting.

So clearly this FinOps stuff is quite important to us.

The Answer

First, we need some method of triggering an OpenFaaS serverless function call from Kafka. The Kafka event trigger requires a license for the PRO edition of OpenFaaS. However, given how quickly we need to deploy FinOps, we have decided to roll our own Kafka event connector [link to repo to be provided in Part 4].

We gave our Kafka connector the ability to set a Consumer Group ID per instance and, if required, to start from a specific Kafka offset. We wanted to start considering offset management for our serverless functions in cases where we may or may not want to replay events from a certain timeframe.

Note: downstream, we would want to understand patterns such as idempotency in case operating the system results in a replay of events.
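To make the idempotency point concrete, here is a toy sketch (an assumption for illustration, not our real FinOps code) of deduplicating billing events by event ID, so a Kafka replay does not double-charge a customer. A real system would persist the seen-set durably, for example via a unique constraint in Postgres.

```go
package main

import "fmt"

// BillingRegister remembers which event IDs have already been billed,
// so that replayed Kafka messages are processed idempotently.
type BillingRegister struct {
	seen map[string]bool
}

func NewBillingRegister() *BillingRegister {
	return &BillingRegister{seen: make(map[string]bool)}
}

// Register returns true only the first time a given event ID is seen;
// replays of the same ID return false and should be skipped.
func (b *BillingRegister) Register(eventID string) bool {
	if b.seen[eventID] {
		return false // replayed event: already billed, skip
	}
	b.seen[eventID] = true
	return true
}

func main() {
	r := NewBillingRegister()
	fmt.Println(r.Register("evt-001")) // true: first delivery, bill it
	fmt.Println(r.Register("evt-001")) // false: replay, ignore
}
```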

Next we need to code our functions. The functions in our real product call on durable storage: both FinOps Billing and Entitlements currently use clustered Postgres RDBMSs, protected behind secured APIs. For simplicity in this code demonstration we will merely log aspects of the events instead of transforming and persisting them.

We can follow the method used in Part 2 of our series to create two new OpenFaaS functions.

  1. faas-register-billing-event
  2. faas-register-entitlement-event

To begin let’s configure the relevant ENV VAR as we did in Part 2.

export OPENFAAS_PREFIX=robrockdataio

Then we create the function artefacts from the Go templates:

faas-cli new --lang go faas-register-billing-event
faas-cli new --lang go faas-register-entitlement-event

Thanks to the power of OpenFaaS we now have two projects for our functions ready to implement. One for billing and one for entitlements.

Next we update our handlers respectively (note these only log, unlike our would-be real handlers that interact with our FinOps databases).

So for the billing event handler.go file:

package function

import (
	"fmt"
)

// Handle a serverless request
func Handle(req []byte) string {
	return fmt.Sprintf("Billing event detected: %s", string(req))
}

And the entitlement event handler.go file:

package function

import (
	"fmt"
)

// Handle a serverless request
func Handle(req []byte) string {
	return fmt.Sprintf("Entitlement event detected: %s", string(req))
}

This is simple enough for our demo purposes; note, though, that a real handler would introspect and filter the events, and of course business logic would be applied. Events could even be processed and forwarded on to other subscribing systems.
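As a sketch of that introspect-and-filter step, a fuller billing handler might unmarshal the event and discard anything that is not billable before (hypothetically) persisting it. The field names below come from the sample event later in this post; everything else is our own illustrative assumption. In the OpenFaaS Go template this would live in `package function`; we use `package main` here so the sketch runs standalone.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the JSON fields of our product-launch domain events.
type event struct {
	EventType       string `json:"event_type"`
	ProductType     string `json:"product_type"`
	CustomerAccount string `json:"customer_account"`
}

// Handle introspects and filters the incoming event before logging it;
// a real handler would go on to persist it via our FinOps APIs.
func Handle(req []byte) string {
	var ev event
	if err := json.Unmarshal(req, &ev); err != nil {
		return fmt.Sprintf("ignoring malformed event: %v", err)
	}
	// Only product-create requests are billable in this sketch.
	if ev.EventType != "product-create-request" {
		return "not a billable event"
	}
	return fmt.Sprintf("Billing event detected: customer=%s product=%s",
		ev.CustomerAccount, ev.ProductType)
}

func main() {
	fmt.Println(Handle([]byte(`{"event_type":"product-create-request",` +
		`"product_type":"gpu-kernel-instance-start",` +
		`"customer_account":"123456"}`)))
}
```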

For each of the OpenFaaS function definition .yml files (if you are unsure, please revisit Part 2), update the annotations with the topic name plus a suffix (-e for entitlements and -b for billing).

Note: we are suffixing the OpenFaaS topic names to add an extra level of abstraction, ensuring that with our own OpenFaasKafkaConnect component each connector has a separate consumer group ID and therefore an independent Kafka offset.

faas-register-entitlement-event.yml

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  faas-register-entitlement-event:
    lang: go
    handler: ./faas-register-entitlement-event
    image: faas-register-entitlement-event:latest
    annotations:
      topic: product-launch-eventstore-e

faas-register-billing-event.yml

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  faas-register-billing-event:
    lang: go
    handler: ./faas-register-billing-event
    image: faas-register-billing-event:latest
    annotations:
      topic: product-launch-eventstore-b

Let’s build, push and deploy our entitlement demo function.

faas-cli build -f faas-register-entitlement-event.yml
faas-cli push -f faas-register-entitlement-event.yml
faas-cli deploy -f faas-register-entitlement-event.yml

Now the same for our billing function.

faas-cli build -f faas-register-billing-event.yml
faas-cli push -f faas-register-billing-event.yml
faas-cli deploy -f faas-register-billing-event.yml

Our demo functions are now built, pushed to our registry and deployed to our OpenFaaS cluster.

We can see this by issuing the following command:

faas-cli list -v

Which lists each function's image, invocation count and replica count for my lab cluster.

Next we will configure our own OpenFaasKafkaConnect instances to run in our development environment, where we have already configured a local Kafka cluster (in Part 4 we will deploy the connectors inside the cluster, as we would in a production scenario).

PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)

export BROKER="127.0.0.1:9092"
export USER_NAME="admin"
export GATEWAY="http://127.0.0.1:8080"
export TOPIC="product-launch-eventstore"

./OpenFaasKafkaConnect --gateway $GATEWAY \
--broker $BROKER \
--gw-username $USER_NAME \
--gw-password $PASSWORD \
--topic $TOPIC \
--of-topic "product-launch-eventstore-b" \
--consumer-group-id "billing-event-consumer-1" &

./OpenFaasKafkaConnect --gateway $GATEWAY \
--broker $BROKER \
--gw-username $USER_NAME \
--gw-password $PASSWORD \
--topic $TOPIC \
--of-topic "product-launch-eventstore-e" \
--consumer-group-id "entitlement-event-consumer-1" &

We can now attempt to generate product-launch domain events to the product-launch eventstore to simulate our cloud product's operation.

First we start our test Kafka console producer to simulate the population of our event store:

bin/kafka-console-producer.sh --topic product-launch-eventstore --bootstrap-server localhost:9092

We then submit an event denoting the launch of a Python notebook product, such as Jupyter, with GPU resources enabled. Our entitlement and billing functions are then expected to detect and register these events for processing. For example, the time spent utilising GPU resources can be ascertained for billing, and the entitlement of the customer (customer_account: 123456) can be verified to determine whether they can indeed use this resource and what type and share of GPU resources are allocated.

{
  "event_type": "product-create-request",
  "datetime_processed": "2022-12-12 13:08:02.263590513 +0000 UTC m=+424059.360227751",
  "product_type": "gpu-kernel-instance-start",
  "datetime_emitted": "2022-12-12 13:08:02.263597333 +0000 UTC m=+424059.360234569",
  "customer_account": "123456",
  "payload": "gpu enabled notebook started"
}

Here we can see from the logs that our custom OpenFaasKafkaConnect instances are able to detect both the billing events and the entitlement events.

Conclusion

In this Part 3 of our series we started to use OpenFaaS to demonstrate how FaaS can integrate FinOps functionality with our products with extreme speed to market. The specific internals of our FinOps solution wander into our own intellectual property, so we won't explore our business logic, algorithms, ML, dashboarding and forecasting functionality here; however, the OpenFaaS functions we named and demonstrated show the potential of on-premises FaaS technologies and how, in a cloud native style, they can be used to deliver business value with rapidity.

In Part 4 of the series we will package our Kafka connector for a production-style deployment. We will then integrate our functions using this method via a production-grade Kafka deployment, in this case one supplied by Stackable. We will also explore operational aspects of using OpenFaaS, such as scaling our functions.
