Bixby Home Studio (BHS) is a web-based graphical tool for designing the voice interfaces between the Bixby virtual assistant and smart devices registered with SmartThings. In BHS, you create logical action flows using connected nodes and save them in voice metadata files, adding intelligence to voice commands. You can also test your action flows and submit them for testing, review, and release, which makes them available to users.
Launch BHS in your browser by visiting https://bhs.bixbydevelopers.com.
BHS displays the Settings window. If you're not logged in to your Samsung Account, you'll receive a prompt to sign in.
If BHS doesn't open the Settings window by default, click on the Settings icon towards the bottom of the left-side menu bar.
To configure BHS for your device in the Settings window:
Connecting a SmartThings device enables you to associate multiple voice capabilities with that device. At the bottom left of the screen, click the Device Details icon. A menu opens up.
Select a location to load the list of devices present in that location. Select a device from the list. Your device is now configured. You can create metadata for it, and then the action flows for the voice intents.
You will see the following device details in the window that appears after you click the Device Details icon:
To create a new project, start by clicking the ⊕ icon in the left menu bar. If you haven't created any projects yet, you can also click the New Project button in the sidebar.
Select the SmartThings location and device from the drop-down menus. Then choose one of these options:
For more information about which option to choose, see the appropriate section below.
Choose this option if your device already supports existing SmartThings capabilities and can therefore use BHS's general-purpose graph metadata. Many SmartThings devices (switches, bulbs, thermostats, plugs, etc.) and SmartThings-compatible third-party devices support SmartThings capabilities.
If the SmartThings Device you select supports existing SmartThings capabilities, you can select Create metadata using SmartThings capabilities of the device. If it doesn't, this option is disabled.
To learn more about existing SmartThings capabilities, see the list of standard SmartThings capabilities.
Click NEXT. You're now prompted to select voice intents from the selected device's existing capabilities. Voice intents are the set of user intentions that Bixby can understand from natural voice commands; different capabilities support different voice intents. Select the voice intents you want.
For more information on voice intents, read the voice intents reference page.
Click NEXT. Add the project name and click DONE.
Choose this option if you want to create all of your own metadata. If the SmartThings Device you select doesn't support existing SmartThings capabilities and you're not planning to work from an existing metadata file, select this option. In that case, you can build your BHS graph from scratch using different capabilities, which rely on protocols other than those of SmartThings capabilities.
Click NEXT. Now you're prompted to select voice intents supported by a specific Bixby Voice Category. Select a Bixby Voice Category. Then, select voice intents from the list.
Click NEXT. Add the project name and click DONE.
Choose this option if you're working with other people, so you can download an existing metadata file from the server. You can modify the downloaded metadata and publish it to the cloud.
Click NEXT. Now you're prompted to select voice intents from released projects. Select an MNID (the manufacturer ID assigned to developers by SmartThings). Then, select a VID (vendor identifier number assigned to your device) from the list. Click NEXT.
To edit the metadata after the project is created, click the BHP Metadata icon at the top left corner of the menu bar in the main editor window.
BHS now displays its main editor screen with the voice intents you added or that were included in the existing metadata you selected, displayed in the Voice Intents section of the left sidebar.
To view or update device configuration information, click Metadata profile in the left sidebar. A new tab opens, which contains the following information:
BHS follows design patterns you'll likely be familiar with if you've used other graphical user interfaces for Internet of Things (IoT) devices: a sidebar on the left and a larger tabbed editor window to the right.
The BHS interface has five main areas:
In the editor, you create action flows: execution graphs that start with a user's spoken command and lead to the appropriate commands to send to the smart device. To build logical action flows to export to your device, drag and drop nodes in the editor, which has a grid background to help you organize your flows. You can easily move nodes around and connect them to each other, or select a group of nodes by dragging a highlight box around them in order to move, delete, or copy them.
At the top right of the editor, you also have options to do the following:
BHS’ left margin displays five icons, three at the top and two at the bottom. These Activity Bar icons switch between various tasks used in action flow development:
You can drag these nodes into the home editor window to create the action flow graph. Each node performs a specific action. An action flow contains different types of nodes connected by execution and data paths. By associating a voice intent with an action flow, you specify the behavior the device should exhibit for that voice intent.
Nodes are the functions that developers can use in BHS. A node consists of the following components:
The action flow nodes are grouped by category:
To learn more about the various types of nodes, see the Nodes reference page.
To learn more about action flow nodes and graphs, read the Sending a Device Command guide.
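The node-and-path model described above can be sketched conceptually in a few lines of Python. This is an illustration only: BHS stores action flows as graph metadata, and the class and function names below are hypothetical, not part of any BHS API.

```python
# Conceptual sketch of an action flow: nodes linked by an execution path,
# sharing data through a common context. Hypothetical names; not BHS's format.

class Node:
    def __init__(self, name, action):
        self.name = name
        self.action = action      # callable run when execution reaches this node
        self.next = None          # execution path to the next node

    def then(self, node):
        """Connect this node's execution path to the given node."""
        self.next = node
        return node

def run(start, context):
    """Walk the execution path from the start node, passing shared data."""
    node = start
    while node is not None:
        node.action(context)
        node = node.next
    return context

# Example: a tiny flow for a "power on" voice intent.
start = Node("start", lambda ctx: ctx.setdefault("command", "on"))
send = Node("sendCommand", lambda ctx: ctx.update(sent=True))
start.then(send)

result = run(start, {"capability": "switch"})
# result == {"capability": "switch", "command": "on", "sent": True}
```

In BHS itself you build this structure visually by dragging nodes and drawing the execution and data paths between them; the sketch only shows the underlying idea of a graph that transforms a voice intent into a device command.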
To streamline your graph, click Align at the top right of the editor.
Voice intents are the set of user intentions that Bixby can understand from natural voice commands. Voice intents determine which voice commands (utterances) can be used to control a device.
For every voice intent, there's an option to either create a new action flow or provide the payload directly in JSON format. You can also import existing metadata for a device to use instead. Different capabilities support various voice intents.
If you want to create and use an action flow for a voice intent, navigate to the relevant Voice Intent menu in the left sidebar, and click Graph.
To learn more, visit the Voice Intents reference page.
To add additional voice intents from the main editor screen:
In the left sidebar, under Metadata profile, click the "+" symbol to the right of "Voice Intents." This opens the "Add Voice Intent" window.
If you can't see the left sidebar, click on the "BHP Metadata" icon in the top left corner of the menu bar to open it.
Select a Category from the drop-down.
Select a voice intent from the list of potential voice actions and sample utterances for the chosen category.
Click ADD. You can view the voice intent you added in the Voice Intents section.
The bottom of the window shows your device information and your account email address. It also allows you to do the following:
To view the Problems tab in the console, click the icons for warnings and errors in the status bar along the bottom of the editing window. This opens a pane at the bottom that shows current issues with the project.
You can choose which kinds of messages to show by checking or unchecking the boxes for Warnings and Errors.
To display the status of your submissions to BHS, click Submissions in the Activity Bar. You can also view the submission history and state transition details.
To learn more about submissions, read the Publishing The Bixby Device Graph (BHS Metadata Submission) guide.
You can use sample action flow graphs to build your own graphs. To add a sample graph to the editor, first click Graph under your chosen voice intent in the Metadata Sidebar, then click the Sample Graphs icon in the Activity Bar, and drag and drop a graph into the editor.
Here are some additional resources on sample graphs:
Introduction to Sample Graphs: Describes sample graphs and how to use them.
Basic Sample Graphs: Highlights basic sample graphs.
Aside from creating new metadata, you can also load, import, export, back up, or restore metadata. This lets you collaborate with other device developers on your action flows and gives you backups of flows you're working on.
You can update a device's existing metadata to add or edit voice actions for it. To do this, click the Load Metadata icon at the top of the Metadata Sidebar, then choose metadata to load from the New or History tab. The New tab lists existing metadata for devices that you have never loaded; the History tab lists existing metadata for devices that you have loaded before.
If you want to collaborate and share your voice metadata, you can import and export the voice metadata as a JSON file.
You can load your voice metadata into the editor. To do this, click the Import button in the menu bar at the top left of the screen.
Click Import to select the metadata file.
Click OK to load the metadata file.
You can upload your voice metadata to a server or share it with another developer using the Export feature. To do this, click the Export button in the menu bar at the top left of the screen.
Enter the filename to save to:
Click the Export button and your Voice Metadata will be downloaded as a JSON file.
To back up your metadata, click the Backup Metadata icon at the top of the Metadata Sidebar. As mentioned earlier, you can set the maximum number of backups by clicking the Settings icon at the bottom right of the Activity Bar.
To restore the metadata that you last worked with, click the Restore Metadata icon at the top of the Metadata Sidebar.
To view the BHS version, click the About icon at the bottom of the Activity Bar.
To provide the payload directly in JSON format, click Raw at the top right of the editor.
A window opens where you can add your raw JSON.
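As a rough illustration of the kind of payload involved, a SmartThings-style device command (what an action flow ultimately produces) has approximately this shape. This is a generic SmartThings command sketch, not BHS's documented raw-metadata schema, so treat the exact field layout as an assumption.

```json
{
  "commands": [
    {
      "component": "main",
      "capability": "switch",
      "command": "on",
      "arguments": []
    }
  ]
}
```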
You can use the editor's Try it out feature to test whether an action flow works as intended on a real device. To test, click the Try It button in the menu bar.
You can see a green dashed line flowing along the execution path, as shown below. Any obtained values or responses are shown below the corresponding nodes.
If you don't have an actual device to test on, you can use the SmartThings simulator to create a virtual device. For more information, see the FAQ: Creating a virtual Device SmartThings community page.
Video Tutorial: Simulated Devices