Blog posts of September 2021

Session Attributes


Session attributes are special variables which you can use to store and retrieve information across different intents.

Why is this important?

Variables declared inside your intents using 'var', 'const' or 'let' are scoped to those intents, meaning you can no longer access them once the intent handler has finished.

Session attributes are global, meaning you can retrieve their data from within any intent.

How do I use them?

The first recommended step is to get the sessionAttributes path using the following line:

const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();

This line essentially just saves you from typing the entire path every time. The location it points to is where you can get existing session attributes, or make new ones.

Now you can make a new session attribute and set its value.

sessionAttributes.<VARIABLE NAME> = <VALUE HERE>;


sessionAttributes.myVariable = "test string"; // remember this variable -- we will call it later!

This is similar to declaring a variable using 'var', 'const' or 'let'.

The value can be anything you want. You should set the value to be something which you want the skill to remember and use across various intents.

Finally, you need to save the session attributes which you have declared. Do so using this line of code:

handlerInput.attributesManager.setSessionAttributes(sessionAttributes);

This line always comes next, and is necessary for your session attributes to be saved.

To call sessionAttributes from within a different intent, you first need to get sessionAttributes again:

const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();

Now you can recall any of your session attributes like this:

sessionAttributes.<VARIABLE NAME>



To test this, you could use console.log() to check the value of myVariable.

console.log(sessionAttributes.myVariable); // output: "test string"

Practical example

Get, then set.

const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();

sessionAttributes.forename = "john";

sessionAttributes.surname = "smith";

sessionAttributes.favColour = "blue";

handlerInput.attributesManager.setSessionAttributes(sessionAttributes);


The next step is retrieving your session attributes from a different intent.

First, get the sessionAttributes.

const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();

Now you can retrieve your sessionAttributes like this:

console.log(sessionAttributes.forename); // output: "john"

console.log(sessionAttributes.surname); // output: "smith"

console.log(sessionAttributes.favColour); // output: "blue"
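The pattern above can be sketched end to end. This is a minimal mock, not the real ASK SDK: `makeAttributesManager` is an illustrative stand-in for `handlerInput.attributesManager`, just to show the get, modify, set round trip.

```javascript
// Mock stand-in for handlerInput.attributesManager (illustrative only).
function makeAttributesManager() {
  let store = {};
  return {
    getSessionAttributes: () => store,
    setSessionAttributes: (attrs) => { store = attrs; }
  };
}

const attributesManager = makeAttributesManager();

// Inside a first intent handler: get, set a value, then save.
const sessionAttributes = attributesManager.getSessionAttributes();
sessionAttributes.favColour = "blue";
attributesManager.setSessionAttributes(sessionAttributes);

// Later, inside a different intent handler: get again and read the value.
const again = attributesManager.getSessionAttributes();
console.log(again.favColour); // output: "blue"
```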

Further reading

The Gloucester Park Skill

We have worked hard on a skill made for the community of Gloucester Park and for those who wish to visit. Please check the video below to see some of the features that this skill has:

We are now going to begin posting regularly on our social media accounts so please check them out.



Why You Should Make a Custom Skill

Every day our lives are flooded with the latest trends and technology, making it hard to keep up. This can start to make us feel like we are being left behind, but we at Roybot are here to offer you a stress-free service to create and publish your very own Custom Skill!

Voice assistants are becoming increasingly popular and a common misconception is that only very skilled IT technicians would possess the knowledge necessary for creating these sorts of bots, but this is far from the truth. Anyone can create their very own voice assistant with the right tools. One way you can do this is through Amazon’s ‘Alexa’. If you have ever dreamed about getting creative with coding, working with Roybot will be a dream come true!

So, why should you make a Custom Skill?

It’s less time consuming

In the past, making presentations involved lengthy research, piles of paper, planning and script drafting, but by creating a Custom Skill the time required is cut down significantly.


It’s paper-free and environmentally friendly

As a Custom Skill is entirely digital, there will be no need whatsoever for any documents, reducing paper wastage and offering an environmentally friendly alternative to present information.

There is no need for scripts or speeches

There is no need to revise any sort of script, since the Skill uses ‘artificial intelligence’ to answer users quickly and easily with short, clear sentences when prompted. This means that a user can ask questions and have the answer within seconds!

It’s hands-free

A Custom Skill is entirely ‘voice activated’, meaning that a user will have both hands free, eliminating the need to sift through your work.

You can learn new skills, including how to program

Programming a Custom Skill is easy and anyone can pick it up, even a beginner. This opens more opportunities for you to learn a new skill without needing to enrol in lengthy, expensive coding courses.

It’s tech-savvy and creative

Artificial Intelligence has gained popularity over recent years, and making a Custom Skill is a smart, interesting way to show off your creativity by creating eye-catching displays to go with your voice assistant.

It’s used by both businesses and individuals

Custom Skills can be used to showcase your skills as an individual, such as with a ‘voice profile’, allowing potential employers to view your CV without reading through pages.

Businesses can use a Custom Skill to ease the stress of managing customer care lines, answer customers’ questions quickly and easily via voice commands, and inform them of useful information about products.

Learning to use APIs with Wikipedia



API stands for “application programming interface”.

When you send a request to an endpoint, its API will interpret your request and then perform the action you have specified.

In this scenario, “endpoint” is the URL which we use to make requests.


I will be using the MediaWiki API in order to demonstrate how we can request information from English Wikipedia:

The endpoint we will send our requests to will be:

https://en.wikipedia.org/w/api.php

How do I specify what information I want?


Let’s say we want to get the first few lines from the Wikipedia page about Amazon Alexa.

In order to tell the API what we want, we need to add some parameters to the endpoint.


We start the string of parameters with a question mark.

Then, if you have multiple parameters, connect them using the “&” symbol.
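If manual string concatenation feels error-prone, `URLSearchParams` (built into Node and browsers) builds the same "key=value" pairs joined by "&" for you. A small sketch:

```javascript
// URLSearchParams joins parameters with "&" and handles URL-encoding.
const params = new URLSearchParams({
  action: "query",
  prop: "extracts",
  format: "json"
});

console.log("?" + params.toString()); // output: "?action=query&prop=extracts&format=json"
```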


Here is a snippet of JavaScript code which makes our request slightly easier to read and construct.

const myPageTitle = "Amazon_Alexa";

const endpoint = "https://en.wikipedia.org/w/api.php";

const params = "?action=query" // query is one possible action.
    + "&prop=extracts" // "extracts" is the name of an extension used by many wikis.
    + "&exsentences=3" // this lets you get the first 3 sentences from a page.
    + "&exlimit=1" // limits how many extracts are returned per request.
    + "&titles=" + myPageTitle // this lets you specify which page you want information from.
    + "&explaintext=1" // means "extract plain text", which is human-friendly for reading.
    + "&format=json" // the data will be returned in JSON format
    + "&formatversion=2" // the JSON will be easier to navigate using index notation
    + "&origin=*"; // prevents a CORS error

Now we have a complete URL which we can use to make our request:

const alexaWikiUrl = endpoint + params;

console.log(alexaWikiUrl); // output: https://en.wikipedia.org/w/api.php?action=query&prop=extracts&exsentences=3&exlimit=1&titles=Amazon_Alexa&explaintext=1&format=json&formatversion=2&origin=*

Try visiting the link in your browser!

You will be presented with the information which would be returned if you were to make the request using code.


In order to make API calls using your Alexa skill, you need to install a package which lets you do so.

There are many choices, but I recommend either node-fetch or Axios. node-fetch appears to be more commonly used, but I personally find Axios easier.

Here is an example of what a fetch request might look like using node fetch:

const fetch = require('node-fetch'); // not needed in the browser, where fetch is built in

const wikiEndpoint = 'https://en.wikipedia.org/w/api.php';

const wikiParams = "?action=query"
    + "&prop=extracts"
    + "&exsentences=3"
    + "&exlimit=1"
    + "&titles=" + "Amazon_Alexa"
    + "&explaintext=1"
    + "&format=json"
    + "&formatversion=2"
    + "&origin=*";

const myUrl = wikiEndpoint + wikiParams;

async function getData(url){
    let res = await fetch(url);
    let data = await res.json();
    return data;
}

getData(myUrl).then(data => {
    console.log(data.query.pages[0].extract);
});



If you were to use Axios, it might look something like this:

const axios = require('axios');

async function getWikiData(){
    const wikiEndpoint = 'https://en.wikipedia.org/w/api.php';

    const wikiParams = "?action=query"
        + "&prop=extracts"
        + "&exlimit=1"
        + "&exsentences=3"
        + "&titles=" + "Amazon_Alexa"
        + "&explaintext=1"
        + "&format=json"
        + "&formatversion=2"
        + "&origin=*";

    const wikiLink = wikiEndpoint + wikiParams;

    var wikiConfig = {
        timeout: 6500 // give up if the request takes longer than 6.5 seconds
    };

    async function getJsonResponse(url, config){
        const res = await axios.get(url, config);
        return res.data;
    }

    return getJsonResponse(wikiLink, wikiConfig).then((result) => {
        return result;
    }).catch((error) => {
        return null;
    });
}

// inside an async intent handler:
const wikiData = await getWikiData();

const wikiOutput = wikiData.query.pages[0].extract;

console.log(wikiOutput);


In both cases, the first 3 sentences from the Amazon Alexa Wikipedia page will be logged.

The code in the node-fetch example should work in your browser console, where fetch is built in.

If you want to test it out, try using this code and experimenting by changing some parameters.

Axios isn’t so easy to test, but the provided example works when implemented correctly, and should be able to serve as a good foundation.

Useful resources

How To Enable The Skill

In the video below you can check out how to easily enable skills for your Alexa device! 


Slots

Slots allow your skill to be more interactive by letting the user input information, so that
the skill's response changes depending on the user's input. This means that a single intent
can return different information depending on what the user is asking for.

We have used this method to create a timesheet for a charity which allows the user to choose
a day in their input, and the skill responds with the information that corresponds to that
day. The user can also ask the skill "what activities are happening today" and the skill
will work out what day it is and respond with the correct information. This also works with
"tomorrow" or any day from Monday to Sunday.
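A hedged sketch of how that day resolution could work. The `timetable` entries and the helper name `resolveDay` are illustrative, not the actual charity skill's code:

```javascript
// Illustrative timetable; the real skill's data would differ.
const timetable = {
  Monday: "Yoga at 10am",
  Tuesday: "Art class at 2pm"
  // ...one entry per day
};

// Turn a spoken slot value ("today", "tomorrow", or a weekday) into a weekday name.
function resolveDay(spokenValue) {
  const days = ["Sunday", "Monday", "Tuesday", "Wednesday",
                "Thursday", "Friday", "Saturday"];
  const today = new Date().getDay();
  if (spokenValue.toLowerCase() === "today") return days[today];
  if (spokenValue.toLowerCase() === "tomorrow") return days[(today + 1) % 7];
  return spokenValue; // assume the slot already holds a weekday name
}

console.log(timetable[resolveDay("Monday")]); // output: "Yoga at 10am"
```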

API integration

APIs are a way to connect your Alexa skill to third-party applications, such as LinkedIn,
Twitter, the BBC, Flickr, and many more. This allows you to display information from these
sites in your skill.

The picture used in this skill is taken from Matthew Cackett (mattc68).

This example uses the Flickr API in your skill, which links your skill to Flickr so that
you can show pictures from the site in your skill. This makes it possible to build a
personal gallery skill, so that all the images you upload to Flickr can be browsed from
your skill. You could also let the user say the username of any account and search through
all the public photos that have been uploaded to that account.

SSML (Speech Synthesis Markup Language)

This is how you change the voice of Alexa in terms of what it sounds like and how it
pronounces and emphasises words. You can make the voice male or female with different
accents, so that interacting with the skill feels more like an actual conversation with a
human. This is a video of some of the different voices and interjections that can be used
in skills.
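In an Alexa skill, SSML is just a string handed to the response builder. A small sketch, assuming the "Emma" voice and the "wowza" interjection (both are examples of supported values, chosen here for illustration):

```javascript
// SSML string: change the voice and add an interjection.
const ssml = '<voice name="Emma">' +
  '<say-as interpret-as="interjection">wowza!</say-as> ' +
  'Welcome to the skill.' +
  '</voice>';

// Inside an intent handler this would typically be returned as:
// return handlerInput.responseBuilder.speak(ssml).getResponse();
console.log(ssml);
```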

When we work with you to create your new skill, we will talk about what you would like the
voice to sound like and what kind of mood you want to set throughout the skill. With good
use of interjections and voice clips, you can make the skill more serious or light-hearted
depending on what you want the skill to be about.