Wednesday, May 18, 2022

What is Nylas, why did I join and why should you care?



I joined Nylas around four months ago as their first Senior Developer Advocate. It might seem like a short amount of time, but in the startup world, time moves differently.

What is Nylas?


In a nutshell, Nylas is a set of APIs that allows you to easily connect with any email, calendar and contacts provider without having to go through a lengthy and complex configuration process.

In other words, we do the heavy lifting so you can focus on your business.


When it comes to Email, we offer “One Email API for Every Provider”. This means our Email API is universal and you can connect to various providers without having to write specific code for each. Also, we provide real-time, bi-directional sync, and full CRUD capabilities. If you want to learn more, just go to our Universal Email API page.

When it comes to Calendar, we offer the same capabilities as our Email API, but also conferencing sync, events metadata, and programmatic webhooks. If you want to learn more, just go to our Universal Calendar API page.

When it comes to Contacts, you can create, update or delete contacts in any supported provider. If you want to learn more, just go to our Universal Contacts API page.


With Email, Calendar, and Contacts APIs, we’re just getting started. Check out what we offer on top of these APIs to give developers even more power.

The Neural API makes exploring the world of AI and ML easy, with ready-made models that can clean conversations, extract signatures, perform OCR, and run sentiment analysis. If you want to learn more, just go to our Neural API page.


Nylas Streams is our ETL solution that requires little to no code to transform and consume communications data for E-Commerce, Sales, Fintech and Customer Success. If you want to learn more, just go to our Nylas Streams page.


Scheduler is a full-featured scheduler with a customizable UI. If you want to learn more, just go to our Scheduler page.


Components are ready-made, fully flexible UI/UX widgets ready for immediate use. If you want to learn more, just go to our Components page.

Why did I join?


The first thing that caught my attention about Nylas was the fact that they provide APIs to make communications easier. Handling Email, Calendars and Contacts gives you a lot of room to create amazing applications.

The second thing that attracted me to Nylas was the company culture. Everybody seemed committed to each other: I saw a lot of internal support whenever the Nylas account posted something on social media, and my interviews felt more like chatting with friends than actual interviews.

The third, and probably most important, thing for me was going back to Developer Advocacy, which is something I'm really passionate about. Being able to share with the community makes me feel good, and it makes me a better person and a better developer.

Obviously, those three things might not be enough to convince you, so let's create a small example.

For this I'm going to choose one of my favorite programming languages: R. And while I'm not an R professional or expert, I'm very passionate about it, so bear with me; there might be better ways to do this.

What we are going to do is simply read the first three messages in my inbox and print the subjects.


The result is going to be:

A Nylas email 
Invitation: RRSP Webinar @ Tue Feb 8, 2022 1pm - 2pm (EST) (alvaro.t@nylas.com) 
Start here: Welcome to Clockwise

Obviously, while R is an awesome language used by statisticians and data scientists around the world, it is not a mainstream language.

That’s why Nylas provides us with SDKs for Python, Ruby, Node, and Java.

Let’s see how we can read our inbox using the Python SDK. If you want to learn how to install the SDK, read our Python Documentation.
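
Something along these lines should do the trick. This is just a minimal sketch; CLIENT_ID, CLIENT_SECRET and ACCESS_TOKEN are placeholders for the credentials from your Nylas dashboard.

from nylas import APIClient

# Placeholders: use the credentials from your own Nylas dashboard
CLIENT_ID = "your_client_id"
CLIENT_SECRET = "your_client_secret"
ACCESS_TOKEN = "your_access_token"

nylas = APIClient(CLIENT_ID, CLIENT_SECRET, ACCESS_TOKEN)

# Fetch the three most recent messages and print their subjects
for message in nylas.messages.where(limit=3):
    print(message.subject)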


The result is going to be:

A Nylas email 
Invitation: RRSP Webinar @ Tue Feb 8, 2022 1pm - 2pm (EST) (alvaro.t@nylas.com) 
Start here: Welcome to Clockwise

As you can see, using the SDK is easier because we don't need to traverse the JSON response or figure out where the element we want to print is. Also, if something changes internally, we can rest assured that the SDK will be updated to reflect those changes, while calling the API directly will require some manual work. And the same goes for Calendar, Contacts, and the rest of our offerings.

Why should you care?

Well, to begin with, I'm your friendly Developer Advocate, so you know I'm going to provide you with constant and interesting content about the Nylas APIs. Also, signing up for a Nylas account is easy: no credit card is required and you get 14 days to try it out. Ready to go? Just go to https://dashboard.nylas.com/register and follow the instructions.

Also, I'm already working on a series of blog posts that will help you get started, make your first API calls and, overall, get the full Nylas experience.

You can start by reading my blog How to Send Emails with the Nylas Python SDK

Don’t think Nylas is for you? Well, recommend us to a friend then. You might know someone who could benefit from having fast, easy, and convenient access to universal communication APIs.

Saturday, January 26, 2019

A Vector Inside

When I finished my blog called “Hey Vector, who do I look like?” using Anki’s Vector and SAP Leonardo’s Machine Learning APIs…I started thinking about what I should work on next…at first…I was obviously running low on ideas…but then…all of a sudden…a nice idea came to me…what if we could control Vector…from the inside? I mean…what if we could simulate that we are inside Vector and that we can see through his eyes and make him move…

That's how this project started 😉


The Idea


So, I knew I wanted to be able to control Vector…I could use Amazon Alexa, but of course…that would leave the "inside" part out…so that was not an option…then I thought about Unreal Engine…as I used it on my blog "SAP Leonardo Machine Learning APIs on the Go", where I combined the SAP Machine Learning APIs, Unreal Engine and an Oculus Go to showcase SAP Leonardo. Using Unreal Engine and the Oculus Go seemed like the perfect combination, so I started working on it.


What are we going to use?


I know that I mentioned many things…so let's get more information about them 😉

Vector

Anki's next evolution of Cozmo. Vector packs not only more power and more independence, but also a microphone, so you can finally talk to him 😉 and it also comes with Amazon Alexa…so it's just an amazing little robot…

Unreal Engine

Unreal Engine is without a doubt "the most powerful creation engine". It can be programmed using C++ or Blueprints (visual programming) and, best of all…it's totally free! Unless you make a commercial game that sells…then they only ask for 5%.


Blender

Blender is an open source 3D creation suite. Modeling, rigging, animation, simulation, rendering and a long etcetera…bundled with version 2.8 comes EEVEE (Extra Easy Virtual Environment Engine), a real-time rendering engine.


HANA Cloud Platform

SAP's in-memory, column-oriented database running on the cloud. It includes Predictive Analysis, Spatial Data Processing, Text Analysis and much more. Also, it's blazing fast 😉


Python3

Interpreted, high-level, general-purpose programming language. It’s the language chosen to code Vector.



The First Problem


As I wanted to see through Vector's eyes…I needed to display Vector's video feed on Unreal Engine…I spent some time thinking about how to do that…but in the end I remembered that when you use Microsoft HoloLens and pair it to your laptop for "streaming", there's always a delay of 1 or 2 seconds…then of course I remembered that videos are just hundreds of images displayed in sequence very fast…I didn't care too much about speed or being close to real time…having a 1 or 2 second delay…is not a bad thing at all…

Well…the problems continued…I knew that I wanted Vector to take a picture every 1 or 2 seconds…and then that picture should reach Unreal Engine…and since I code in Python on my Ubuntu Virtual Machine and Unreal Engine on my regular Windows laptop…I wasn't too sure about sending the image as a file…so…I got the idea of encoding it as Base 64 (which, yes…increases the size…but at least gives you a single huge string to deal with) and sending it…but how? Well…SAP HANA is an in-memory database…so it's pretty fast…why not create a table and some REST APIs to deal with the creation, viewing and deletion of records…


Every time Vector takes a picture...it gets converted to Base 64 and then sent to the cloud; then Unreal Engine reads the API, decodes the Base 64 back into an image and displays it…that led to the second problem…

The Second Problem


How do I decode a Base 64 image on Unreal Engine? While I know how to use C++ very well…when it comes to Unreal Engine I mostly use Blueprints, which is visual programming, and while underneath it's C++, not everything is implemented…

Luckily, a quick visit to Unreal's documentation gave me an answer in the form of a Base 64 encoding/decoding function…but in C++, of course…

But…the good thing about Unreal Engine is that you can create a C++ project…implement your Base 64 encoder class and then start creating Blueprints to consume it…another problem tackled…

That's what I thought…but then I realized that it wasn't just a matter of having the picture back as a picture…I actually needed a dynamic material where I could display the images…I browsed the web and found some interesting articles…but nothing that could really help me…in the end…I grabbed pieces from here and there, added my own research…and managed to make it work…

And yes…if you’re wondering…there was another problem…

The Third Problem


Everything was nice and dandy…I tested my solution…initially by passing some images in sequence to the cloud and then to Unreal, and then using Vector…with a delay of 1 or 2 seconds…it looked like a video running on an old smartphone on a cheap Internet connection in the middle of the desert…good enough for me 😉

But…we were supposed to be inside Vector, right? How was I supposed to simulate that? After not too much thinking…I decided to use Blender 2.8, which is in Beta right now 😊 and which comes with EEVEE (an awesome real-time renderer). I made a small "control" room…with some fancy buttons and panels…a chair where you can sit while controlling "Vector" and a big screen to see what Vector is seeing…

Baking is not working right now…or at least I'm too dumb to figure out how to use it on 2.8, so using textures on Cycles was out of the question…so…I made a test on EEVEE using just plain materials…exported them as .FBX and they worked like a charm! So, I started working and tested it on Unreal…of course…I'm not a Blender expert, so while everything looks nice…not all the colors are rendered correctly ☹ At least it looks fairly decent 😉

No more problems


Yep…not that everything went nice and smooth…but at least those were the most critical problems…so now we can actually start with the blog -:P

Blender and The Control Room


As I said…I used EEVEE rendering on Blender 2.8 Beta to create a control room that would somehow give the impression of being inside Vector…of course…a totally and completely poetic version, because a) who knows how Vector looks inside? and b) I don't think there's enough room inside Vector to fit anything else…

First, I started by adding some buttons, knobs and keys to a panel…then I added some sort of radars along with sliders…


Then, I thought some switches and multi-colored buttons would make a nice addition…


Finally, I added a chair…because you need to sit somewhere, right?


The screen is just a white space…pretty much like in a cinema…


Looks pretty cool on Blender, right? Well…not so much on Unreal…not ugly…but certainly not optimized…probably due to the fact that I merged everything together and exported it as one big chunk…but that's fine…I'm not changing that…I'm lazy 😊


You see…not perfect…but not that bad either 😉

Here’s the .FBX file

Creating the Tables and APIs on HANA Cloud Platform


Next step…I created two tables on HANA Cloud Platform…I called the first table “VECTOREYES” because it’s the table that will hold the Base 64 images. Here’s the script to create it…

CREATE COLUMN TABLE "I830502"."VECTOREYES"(
 "TIMESTAMP" LONGDATE CS_LONGDATE NOT NULL,
 "VECTOREYE" CLOB MEMORY THRESHOLD 3000,
 PRIMARY KEY (
  "TIMESTAMP"
 )
) UNLOAD PRIORITY 5 AUTO MERGE;

For the primary key I used a TIMESTAMP, basically because if something happens with the connection, there will be no primary key clashes…

The next table will be called "VECTORCOMMAND" and will hold…the commands that we will send to Vector…

CREATE COLUMN TABLE "I830502"."VECTORCOMMAND"(
 "NID" INTEGER CS_INT,
 "COMMAND" NVARCHAR(50),
 PRIMARY KEY (
  "NID"
 )
) UNLOAD PRIORITY 5 AUTO MERGE;

In this case…there's always going to be only one command…so I used a single integer primary key.

With the tables created, we can generate our XS Engine package…and simply call it “VectorEyes”.

Create the following files…

.xsaccess
{
     "exposed" : true,  
                  
     "authentication" :                                            
            {
               "method": "Basic"   
            },
  
     "cache_control" : "must-revalidate", 

     "cors" :                      
            {
             "enabled" : true,
             "allowMethods": [
   "GET",
   "POST",
   "HEAD",
   "OPTIONS"
   ]
            }, 
                     
     "enable_etags" : false,

     "force_ssl" : false,
     
     "prevent_xsrf" : false
}


.xsapp


Yep…not a typo…this is actually totally and completely empty…


AddVectorEye.xsjs
$.response.contentType = "text/html";

var conn = $.db.getConnection();

var content = $.request.body.asString();
content = JSON.parse(content);

var st = conn.prepareStatement("INSERT INTO \"YourSchema\".\"VECTOREYES\" values(?,?)");

st.setString(1,content.timestamp);
st.setString(2,content.vectoreye);

st.execute();
conn.commit();
st.close();
conn.close();

GetAddVectorEye.xsodata
service namespace "YourSchema"{
 "YourSchema"."VECTOREYES" as "vectoreye";
}

DeleteVectorEye.xsjs
$.response.contentType = "text/html";

var conn = $.db.getConnection();

var st = conn.prepareStatement("DELETE FROM \"YourSchema\".\"VECTOREYES\"");

st.execute();
conn.commit();
st.close();
conn.close();

With that, we can insert, read and delete the VECTOREYES table. Let’s continue with VECTORCOMMAND table files…

AddVectorCommand.xsjs
$.response.contentType = "text/html";

var nid = $.request.parameters.get("nid");
var command = $.request.parameters.get("command");

var conn = $.db.getConnection();

var st = conn.prepareStatement("INSERT INTO \"YourSchema\".\"VECTORCOMMAND\" values(?,?)");

st.setString(1,nid);
st.setString(2,command);

st.execute();
conn.commit();
st.close();
conn.close();


GetVectorCommand.xsodata
service namespace "YourSchema"{
 "YourSchema"."VECTORCOMMAND" as "vectorcommand";
}


DeleteVectorCommand.xsjs
$.response.contentType = "text/html";

var conn = $.db.getConnection();

var st = conn.prepareStatement("DELETE FROM \"YourSchema\".\"VECTORCOMMAND\"");

st.execute();
conn.commit();
st.close();
conn.close();

That's it 😊 We simply need to activate it and test it…for sure Postman is the way to go 😉
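
If you'd rather script the test than use Postman, here's a minimal sketch using Python's requests library. The host, user and password below are placeholders for your own HANA account.

import base64
import datetime
import requests

# Placeholders: replace with your own HANA host and credentials
BASE_URL = "https://YourHANA.ondemand.com/VectorEyes"
AUTH = ("YourUser", "YourPassword")

# Insert a dummy Base 64 "image" with a timestamp as the key
payload = {
    "timestamp": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
    "vectoreye": base64.b64encode(b"not really a picture").decode("ascii"),
}
print(requests.post(BASE_URL + "/AddVectorEye.xsjs", json=payload, auth=AUTH).status_code)

# Read it back through the OData service
print(requests.get(BASE_URL + "/GetAddVectorEye.xsodata/vectoreye",
                   params={"$format": "json"}, auth=AUTH).json())

# Clean up the VECTOREYES table again
print(requests.get(BASE_URL + "/DeleteVectorEye.xsjs", auth=AUTH).status_code)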

Creating our Unreal Engine project


As I mentioned earlier...I created an empty C++ Project using Mobile/Tablet, Scalable 3D or 2D and no starter content. I used Unreal Engine version 4.21.1 and called the project "VectorOculusGo".




When the project is open, I selected “File --> New C++ Class”, and chose “Actor”.


I called the class “ImageParser” and used the following code for “ImageParser.h” and “ImageParser.cpp”

ImageParser.h
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "ImageParser.generated.h"

UCLASS()
class VECTOROCULUSGO_API AImageParser : public AActor
{
 GENERATED_BODY()
 
public: 
 // Sets default values for this actor's properties
 AImageParser();

protected:
 // Called when the game starts or when spawned
 virtual void BeginPlay() override;

public: 
 // Called every frame
 virtual void Tick(float DeltaTime) override;

 UFUNCTION(BlueprintCallable, Category = "ImageParser")
  void ParseImage(FString encoded, TArray<uint8> &decoded);
};

Here we're creating a function that can be called via Blueprints. It will receive a String (the Base 64 text) and will return an array of bytes (uint8).

ImageParser.cpp
#include "ImageParser.h"
#include "Misc/Base64.h"

// Sets default values
AImageParser::AImageParser()
{
  // Set this actor to call Tick() every frame.  
        // You can turn this off to improve performance if you don't need it.
 PrimaryActorTick.bCanEverTick = true;

}

// Called when the game starts or when spawned
void AImageParser::BeginPlay()
{
 Super::BeginPlay();
 
}

// Called every frame
void AImageParser::Tick(float DeltaTime)
{
 Super::Tick(DeltaTime);

}

void AImageParser::ParseImage(FString encoded, TArray<uint8> &decoded)
{
 FBase64::Decode(encoded, decoded);
}

Here, we simply call the Decode method from the Base64 library. This grabs the Base 64 string and converts it back into the raw bytes of the image.

In order to compile, we just need to right-click on the project name and select “Debug --> Start new instance”.


After the compilation is done, we can simply stop the debugging.

Before we continue…we need to download a library to manage REST APIs…it's called JSONQuery and it's amazing!

Simply close Unreal, go to the project folder and create a new folder called "Plugins", then download the .zip, unzip it inside the "Plugins" folder and delete the Binaries and Intermediate folders.

Then, you will need to change the source code a little bit…

Inside the “JSONQuery” folder, go to “Source --> JSONQuery --> Classes --> JsonFieldData.h” and look for “GetRequest”.

After const FString& url, add const FString& auth.

Then open “Source --> JSONQuery --> Private --> jsonfielddata.cpp” and look for the same “GetRequest”.

Here, add the same const FString& auth.

After the HttpRequest->SetURL(CreateURL(url)); add the following…

HttpRequest->SetHeader(TEXT("Authorization"), auth);

Save both files and open your project. You will get a message saying that part of the code needs to be recompiled. Simply accept and wait a little bit until everything gets compiled 😊

To check that everything is fine, go to "Settings --> Plugins" and go all the way down to find "Project --> Web"; JSON Query should be selected 😉




Awesome, let’s continue.

In order to make our project work on the Oculus Go, we need to set up a couple of things.

Setting up the Oculus Go


You may want to set up your Oculus if you haven't done that already 😊 Here's a nice link with all the explanation you need…

Setting up Unreal for Oculus Go


We need to install "CodeWorks for Android", which is actually bundled with your Unreal installation. So, go to "Program Files --> Epic Games --> UE_4.21 --> Engine --> Extras --> AndroidWorks --> Win64" and run "CodeWorksforAndroid-1R7u1-windows.exe".

You will notice that you are inside the C++ Classes folder, so just click on the folder icon next to it and select “Content”.



Don’t pay attention to the folders for now.

First, save the current map and call it “MainMap”. Then go to “Edit --> Project Settings”. Look for “Maps & Modes” and select “MainMap” in both “Editor Startup Map” and “Game Default Map”.


Then go down and select “Engine --> Input”. On the “Mobile” section set the Default Touch Interface to None.


Move down to “Platforms” and select “Android”. Click on “Configure Now”. Then move to “Android”. Set the minimum and target SDK version to “19”.

Also click on “Enable Fullscreen Immersive on KitKat and above devices” to enable it.

Look for “Configure the AndroidManifest for Deployment to Oculus” and enable it as well.

Now, click on “Android SDK” and check the configuration. If you don’t have the System Variables configured, then simply assign the folder paths.

Finally, go to “Engine --> Rendering” and make sure that “Mobile HDR” is not selected.

If something is not clear, just go to this link 😉

Alright, now we can finally move on 😊

Creating a Dynamic Material


Click on “Add New --> Material” and call it “Dynamic_Mat”. Once inside the material editor, right-click on an empty space and look for “TextureSampleParameter2D”.




Once created, name it "Texture_Sample". It will come with a default texture that you can change if you want (but it doesn't matter in the end). Simply connect the first output to the "Base Color" of the "Dynamic_Mat" node.


Save it and it will be automatically applied. The good thing about this setup is that the Param2D is dynamic 😉

Creating our first Blueprint


Create a new folder and call it “Blueprints”. Here we’re going to create the screen where the images coming from Vector are going to be displayed.

Press “Add New --> Blueprint Class”.


Instead of choosing "Actor" as the parent class…go down to "All Classes", look for "Image Parser" and select it as the parent class.


Name it “ImageRenderer”.

Once created, go to the Viewport tab and click on “Add Component --> Cube”. Simply change its scale to “0.01, 1.0, 1.0”.



Then switch to the “Event Graph” tab. This is where we are going to build our Blueprints.

But first, we need to create a couple of variables.

CubeMaterial --> Material Instance (Object Reference)


This is going to be the material of the cube that we created.

TempImg --> Texture 2D (Object Reference)


This is where we’re going to store the image after converting it from Base 64 to image.

TempMat --> Material Instance (Object Reference)

This is the dynamic material that is going to be assigned to our cube.

ImageJSON --> String

This is the result from calling the API…the Base 64 string.

With the variables ready, we can start creating the first piece of the Blueprint.


Here, we are saying that once our application starts (Event BeginPlay) we’re going to call a function called “Set Timer by Function Name”. This function will call another function every 2.0 seconds (as we ticked the Looping value). The called function will be “MyEvent”.



Here, we are calling the function “MyEvent”, which will call “Get JSON Request” by passing the URL and the Auth. This will be bound to the “OnGetResult” event. The result from the JSON call will be extracted by using Get Object Field, Get Object Array Field, a For Loop and finally a Get String Field in order to get the Base 64 image and store it on the ImageJSON variable.


After setting the ImageJSON variable, we call the API to clear the table. After this…things get interesting…


Here we are calling our C++ class "Parse Image" like any other Blueprint element. We take the value stored in ImageJSON and decode it back into an image. The result of decoding the Base 64 string goes into "Import Buffer as Texture 2D", which goes into the TempImg variable. After this, "Create Dynamic Material Instance" creates a dynamic material from our Dynamic_Mat material and assigns it to TempMat, which will be passed as the target of Set Texture Parameter Value, while TempImg will be passed as the value parameter. Finally, a Set Material node will assign the TempMat material to the Cube.

To make it simple…we grab the Base 64 string…convert it into an image…create a dynamic material, use the image as the value for that dynamic material and finally assign it to our cube. Every time we get a new Base 64 value, we will get a new image and our cube will be able to display it 😉

Importing our Blender model


Now, we need to simulate that we're inside Vector…hence…we need to import our Blender .FBX model 😉

Simply press Import and select the .FBX file that you can get from here. Press Import All and you will have it on the screen.

Change the following parameters…


Now, add a Point Light with these parameters…


Next, grab the “ImageRenderer” Blueprint and drag it into the screen. Change the parameters like this…


Then, press "Build" and wait till everything (including the lights) gets built.


Once the build is done…you will have this…


Awesome! Everything is starting to take shape 😊 But now…we need to add the real Oculus Go support 😉

Adding Oculus Go support


So, we configured our project to work on an Oculus Go…but that's not enough 😉 We need to do a couple of extra things…and of course…most importantly…we need to add a way to control Vector using the Oculus Go controller 😊

Create a new folder and call it “Modes”. Then create a new “Blueprint Class” but this time choose “Pawn” and name it “Pawn Blueprint” (Smart, huh?).

When it opens up, go to the left section and select “DefaultSceneRoot”, then click on “Add Component” and select “Scene” and change its name to “VRCameraRoot”.

Select “VRCameraRoot” and add a “Camera” component, name it “VRCamera”.

Select “VRCameraRoot” and add a "Motion Controller" component, name it “OculusGoController”.

Select “OculusGoController” and add a "Static Mesh" component, name it “OculusGoMesh”.

To make it clear…here's a screenshot 😊


With the “OculusGoMesh” selected, go to its properties and on the Static Mesh one, choose “OculusGoController” mesh.


After this, we need to create some variables…the first one will be "CameraHeight" and will be an editable "Vector".



The second one will be called "request" and will be a Json Field Data (Object Reference).

Finally, create one called “Lift” of type Boolean and a String variable named “Var”.



If you're wondering about the open eye next to "CameraHeight", that simply means that it's "Public", and you can change it by clicking on it.

Now, we can continue on the “Event Graph” tab.


Here, we want the tracking origin to be set to our eye level when the application starts (Event BeginPlay). The SetRelativeLocation node will be called, where the target will be the VRCameraRoot and the new location will be set to CameraHeight. In other words, what we see is going to be at our eye level.



When we press the Thumbstick Up or Forward, Left or Right, we assign the result to our Var variable, then we call the Get JSON Request function. The URL will be the API address plus the value of the Var variable.


Here, we handle clicks on the "Back" button of the Oculus controller. The first time we click, the "Lift" variable is going to be "False", so we make it "True". If it's "True", we send the "up" command. If we click again, we make it "False" and send the "down" command. This way we can control Vector's lift handle.

Alright, compile, save and that's done 😊 We simply need to add it to our scene. So, drag it in and change these parameters.


Also, and this is very important…


Auto Possess Player should be Player 0.

Now, press "Build" and wait till everything (including the lights) gets built.

Then press Play…and you will see this…



Of course, if you try to move using your mouse…nothing will happen…so you need to send it to your Oculus Go 😉

To do that, simply go to Launch and select your device…it will take a long time the first time because all the shaders, Blueprints and so on need to be compiled…but after that, you will be able to put on your headset and look around 😊 Although…you're not going to see anything on the screen, because we still need to get Vector up and running 😉

Installing Vector’s SDK


First, make sure Vector is connected to the Internet by using the Vector app…here’s a nice video on how to do that…

Once you check that, kill the app from your phone…as it might interfere with your own application taking control of Vector…

You can install the SDK by doing

python3 -m pip install --user anki_vector

Then…authenticate your Vector by doing…

python3 -m anki_vector.configure

You will be asked for Vector's name, IP address and serial number. You will also be asked for your Anki Cloud credentials.

To get this information, simply put Vector on his charger…and press his top twice. This will give you his name; then lift his handle up and down to get the IP address. The serial number is on Vector's bottom.

Creating Vector’s script


This script is the last part of our journey 😊 Simply create a new file called VectorOculusGo.py

VectorOculusGo.py
import anki_vector  #Control Vector
import requests  #Use REST APIs
import json  #Consume JSON
import time  #Manage time
from anki_vector.util import degrees, distance_mm, speed_mmps
import base64 #Encode/Decode images
import datetime  #To get time and data

#URLs to manage upload of Base 64 images and to control Vector using the 
#Oculus Go controller
urlAddEye = "https://YourHANA.ondemand.com/VectorEyes/AddVectorEye.xsjs"
urlGetCommand = "https://YourHANA.ondemand.com/VectorEyes/GetVectorCommand.xsodata/vectorcommand"
urlDeleteCommand = "https://YourHANA.ondemand.com/VectorEyes/DeleteVectorCommand.xsjs"
   
def main():
    #We establish a connection with Vector and enable his camera
    robot = anki_vector.Robot(enable_camera_feed=True)
    #We connect to Vector
    robot.connect()
    i = 0
    #We want this to loop forever…until we close the program
    while i == 0:
        #We instruct Vector to take a picture
        image = robot.camera.latest_image
        #And save it
        image.save("./img/Temp.png")
        #Once saved, we open it
        with open("./img/Temp.png", "rb") as imageFile:
            #We get the time and create a timestamp
            ts = time.time()
            timestamp = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')
            #We encode the picture as a Base 64 string
            strImg = base64.b64encode(imageFile.read())
            #The payload is the data that we are sending to the REST API
            payload = ("{\"timestamp\":\"" + timestamp + "\",\"vectoreye\":\"" +
                       strImg.decode('ascii') + "\"}")

            #In the headers, we pass the authentication for the REST API        
            headers = {
                'Content-Type': "application/x-www-form-urlencoded",
                'Authorization': "YourAuthorization",
            }

            #We upload the Base 64 string of the image to the DB
            response = requests.request("POST", urlAddEye, data=payload, headers=headers)
            #We put the application to sleep for 2 seconds so we don't overload the DB
            time.sleep(2)
            querystring = {"$format":"json"}
            #Right after uploading the Base 64 string, 
            #we want to get any commands coming through
            response = requests.request("GET", urlGetCommand, headers=headers, 
                                        params=querystring)
            #We convert the response to JSON
            json_response = json.loads(response.text)
            #We need to check if there’s any information first and then extract the command
            try:
                json_text = json_response['d']['results'][0]['COMMAND']
            except:
                json_text = ""
            #Depending on the command, we make Vector move forward, backward or 
            #lift his handle. If the lift was already up, we put it down first…
            if (json_text == 'forward'):
                robot.behavior.drive_straight(distance_mm(50), speed_mmps(50))
            elif (json_text == 'backward'):
                robot.behavior.drive_straight(distance_mm(-50), speed_mmps(50))
            elif(json_text == 'right'):
                robot.behavior.turn_in_place(degrees(-90))
            elif(json_text == 'left'):
                robot.behavior.turn_in_place(degrees(90))
            elif(json_text == 'up'):
                robot.behavior.set_lift_height(0.0)
                robot.behavior.set_lift_height(1.0)
            elif(json_text == 'down'):
                robot.behavior.set_lift_height(0.0)
            #After receiving the command, we simply delete it from the DB
            response = requests.request("GET", urlDeleteCommand, headers=headers)
                
if __name__ == '__main__':
    main()

Nice, the source code is self-explanatory…but still…let's go through what is going on in this application…

We want Vector to take a picture every 2 seconds…once a picture is taken, we want to convert it into a Base 64 string and then, along with a timestamp (a date with hours, minutes and seconds), send it to the database. Once that's done…we rest for 2 seconds and check if there's any command available. If there is one, we make Vector act accordingly…and just to avoid repeating the same command…we simply delete it from the database, so we can issue a new one.

Putting it all together


Great! Now we have our application running on the Oculus Go and our Vector ready to execute our script.

So…get a Terminal or CMD window ready with the following line…

python3 VectorOculusGo.py

Put on your Oculus Go headset, grab your controller and then hit "Enter" on your keyboard. Our script will start running and you will see what Vector is looking at…something like this…


I know…that's actually running on Unreal Engine and not on the Oculus…but that's what the video is for 😉


I hope you like this blog and enjoy controlling Vector from the inside! -:D

Greetings,

Blag.
SAP Labs Network.