Bixby Developer Center

Selection Learning

Bixby at times prompts users when they must make a selection to proceed. For example, if the user says "Weather in San Jose," Bixby must find out which of the many cities named San Jose the user is interested in. What might seem like the right option for some users (people living in San Jose, CA) can be the completely wrong answer for others (San Jose, Costa Rica). These are important concerns in a global, dynamic system such as Bixby. Initially, Bixby doesn't know much about the user, so it can't confidently make a selection in that context and instead prompts the user for clarification with a selection prompt from a fixed list of values. Once the user selects "San Jose, CA", Bixby learns what types of cities that user might ask about for weather queries. This information is also used anonymously to help new users in the system with similar requests.

Selection Learning is Bixby's way of automatically learning about users from their selections. It helps accelerate and personalize the user's interaction with your capsule by automatically making selections for the user. When a search action returns multiple results and one must be selected from that set (that is, many results are being returned for an input with a max(one) cardinality), Selection Learning helps Bixby to disambiguate and select the best options.

Bixby automatically learns what a specific user likes to select, as well as what users like to select in general. The more Bixby knows about a specific user based on their past selections, preferences, and context, the more likely it is to automatically make a personalized selection. New users, however, are more likely to see Bixby automatically choose the best selection based on what it has learned from the interactions of all users. For example, if the majority of people select "Dublin, IE" when they request "Weather in Dublin", Bixby starts to learn the factors behind why Dublin, IE was selected. If that is not what a user wanted, they can always change the selection that Bixby makes. Doing so teaches Bixby about the user's preferences as well as what is generally the best selection for that context. Ultimately, if Bixby is unsure about which option to select, it defaults to the first result that you as the developer provide, or it prompts the user.

Bixby further enhances its learning by considering the user's context with every selection. By considering context such as time, location, and action, Bixby varies its behavior to better match a user's habits, routines, and needs. Keep in mind that Bixby learns the factors that help make a selection, not the specific selection decision. In doing so Bixby can generalize what it learns. It doesn't need to see all possible interactions before knowing what to select in the future for a new context. Bixby learns on its own what context factors are important for making a decision. It adapts its behavior constantly based on how users interact with Bixby. Selection Learning is just one of the many ways that Bixby learns something new each day.

Consider the example of a ride-sharing app. Users might make different selections based on time of day. For example, users might need to get to work early in the morning and thus choose not to share a ride with others. However, in the evening when time is not a strict constraint, they might choose to save money and ride with others. With enough evidence, such as when the user makes selections and corrections using the Understanding Page, Bixby learns to automatically make those different decisions for the user.

Selection Prompts

You control when Bixby uses Selection Learning. As the expert of your domain, you can override Bixby's ability to use Selection Learning on an "as-needed" basis. For example, what if the user has previously booked a flight with multiple passengers? You always want to confirm who is flying. Otherwise, Bixby might assume that the person or people who previously booked the flight are flying again.

You can do this by adding prompt-behavior to the action input and setting it to AlwaysSelection.

action (AddPassenger) {
  type (UpdateTransaction)

  input (flightBooking) {
    type (FlightBooking)
    min (Required) max (One)
  }
  input (passenger) {
    type (Passenger)
    min (Required) max (One)
    prompt-behavior (AlwaysSelection)
  }

  output (FlightBooking)
}
Note

Even though AlwaysSelection forces the user to make a selection and ignores Selection Learning, Bixby still learns from the selection that the user makes.

When this prompt-behavior is not specified, Selection Learning automatically makes a decision whenever there are more options than an action needs based on min and max input cardinality requirements.
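For contrast, here is a sketch of the same passenger input with prompt-behavior omitted. It mirrors the AddPassenger example above; with no prompt-behavior set, Selection Learning is free to make the decision automatically when more options are available than max (One) allows:

```bxb
input (passenger) {
  type (Passenger)
  min (Required) max (One)
  // No prompt-behavior: if Selection Learning is confident,
  // Bixby can select a passenger without prompting the user.
}
```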

Enable Selection Learning

You must explicitly enable Selection Learning for each action input you want Bixby to learn selections for. When you include a with-learning block within a default-select block in an action input declaration (as shown in the example below), you instruct Bixby to learn the geo.NamedPoint that each user individually prefers during weather.FindWeather actions. Based on the actions and Selection Strategies that you define, Bixby dynamically learns the best geo.NamedPoint for each user and their context.

action (weather.FindWeather) {
  type (Search)

  input (where) {
    type (geo.NamedPoint)
    min (Required)

    default-select {
      // Enable Selection Learning without any Selection Behaviors
      with-learning { }
    }
  }
  ...
}

For example, if a user asks "What is the weather in Dublin," the city is actually ambiguous because Bixby knows of at least four different cities named Dublin:

  1. Dublin, IE
  2. Dublin, CA
  3. Dublin, GA
  4. Dublin, OH

When you enable Selection Learning, Bixby learns which Dublin the user is likely to use when checking the weather. This also applies when the user selects among a number of geo.NamedPoint concepts in general. This kind of automatic learning is unique for each user. Even new Bixby users benefit because Bixby selects what is likely the best choice based on the collective behavior of other Bixby users. As user needs change, they can teach Bixby a new Dublin they prefer at any time, and Bixby will adapt and learn.

See the Selection Behavior section for details on how to specify behavior options within the with-learning block. For additional examples of selection learning, see the Selection Learning sample capsule.

Selection Learning and Selection Rules

You can enable Selection Learning as well as Selection Rules for action inputs. At any time, you can add Selection Learning, Selection Rules, or both to improve and personalize the user's experience. When you enable Selection Learning, Bixby references it first to see if it has learned the best options to select for a user. At that point, Bixby determines one of the following:

  • Selection Learning is confident about the options to automatically select for the user. In this case execution uses these options, and Selection Rules, if specified, are not referenced. Bixby also does not prompt the user.
  • Selection Learning is not confident about what to select for the user. In this case, execution will reference Selection Rules, if specified. If at least one Selection Rule is specified, an option is always selected automatically for the user. If no rules exist, Bixby prompts the user.

So when should you add Selection Learning, Selection Rules, or both? Selection Rules are a great way for you to encode default selection behavior. Rules provide a way to deterministically choose the best options for all users. An action input with only rules will not learn different selections based on context, nor will it learn different, personalized behavior for each user. For Bixby to learn the most context-aware and personalized selection behavior, you should enable Selection Learning and specify Selection Strategies to help Bixby learn the best selections for your prompts.
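As a sketch of how the two mechanisms can coexist on one input, the example below extends the weather input from this guide. The with-rule block and its select-first key are assumptions based on the Selection Rules reference documentation and should be verified there:

```bxb
input (where) {
  type (geo.NamedPoint)
  min (Required) max (One)

  default-select {
    // Selection Learning is consulted first.
    with-learning { }
    // If learning is not confident, this rule (assumed syntax)
    // picks the first option instead of prompting the user.
    with-rule {
      select-first
    }
  }
}
```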

Selection Strategies

Bixby relies on you, the developer, to help inform Selection Learning about the best choices for users. For example, when users search for coffee shops near them, how far away should Bixby look? Now answer the same question if users are looking for concerts. In most cases, users are willing to travel further for a concert than for a coffee shop. As a developer, you can provide ranges and ratings to Bixby so that it can make better recommendations.

In some cases, you might want to recommend against selections that simply don't make sense. For example, if you are tracking stock prices, all stock prices will occur in the present or past, not the future.

As the developer, you use Selection Strategies to provide this advice. Note that there are no strategy components related to context (time and location). Bixby's learning algorithms automatically determine which pieces of context are helpful in picking the best option for a user based on their interactions with the system. This includes, among other things, where and when users made requests, where they made selections, and where they completed transactions.

Selection Strategy Components

Selection strategies are made of two components: match and advice.

The match pattern determines where you want to provide a strategy, and the advice determines the ranges and scores you provide.

Here is an example selection strategy for geographic search:

selection-strategy {
  id (user-near-search-region)

  match {
    viv.geo.SearchRegion (this)
  }

  named-advice ("near-current") {
    advice ("${calculateDistance($user.currentLocation, this.pointRadius.centroid, 'Miles').magnitude}")
    advise-for { lowerBound(0) upperBound(25.0) }
    // advise-against in this case is redundant, but is included for
    // demonstration purposes
    advise-against { lowerBound(25.1) }
  }
}

Notice the match pattern, which matches the viv.geo.SearchRegion concept. Then look at the named-advice, which does three things. First, using the centroid, it determines how many miles away a place is from the user. Next, it advises Bixby for an option whose centroid is 25 miles away or less. Conversely, it advises against an option whose centroid is more than 25 miles away. These ranges can't overlap, and any values not in these ranges are ignored.


Here is another example from the Selection Learning sample capsule, a prefer-dropoff-eta strategy that chooses a ride share based on the drop-off ETA:

selection-strategy {
  id (prefer-dropoff-eta)
  match {
    RideShare(this)
  }
  named-advice ("prefer-dropoff-eta") {
    advice ("${this.dropoffETA}")
    advise-for { lowerBoundOpen (0.0) }
  }
}


You can write each piece of advice using Expression Language expressions and functions, as explained in documentation for Expression Language (EL).

Here is an example selection strategy for stock search:

selection-strategy {
  id (stock-past-tense-date)

  match: viv.time.Date (this) {
    to-input: viv.stock.FindStockInfo
  }

  named-advice ("accept-past-only") {
    // For stocks, advise against the future... we don't predict the future...
    advice ("${isPast(this) ? 1.0 : isFuture(this) ? -1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) upperBoundClosed (1.0) }
    advise-against { lowerBoundClosed (-1.0) upperBoundClosed (-1.0) }
  }
}

In this selection strategy, the advice provides scoring. If a specified date occurs in the past, it assigns a score of 1.0. Conversely, if the date occurs in the future, it assigns a score of -1.0.

These scores should be monotonically increasing or decreasing. Don't worry too much about the values you assign. It's a dynamic world, and Bixby's learning algorithms will do the hard work for you!

With all of these components, you have a full selection strategy. Bixby's learning algorithms will then use those scores, range assessments, and context as it dynamically learns what strategies and scores to trust in the Selection Learning process for each user. See Selection Strategies and Selection Learning Best Practices for more details.

Note

You can also use simplified syntax with selection-strategy to handle common filtering by combining the advice with the advise-for or advise-against keys.

Examples

selection-strategy {
  id (reject-bitcoin-payment-restaurants)
  match {
    food.AcceptsBitcoin(acceptsBitcoin) {
      to-input: food.FindRestaurant(_)
    }
  }
  advise-against ("${acceptsBitcoin}")
}
selection-strategy {
  id (prefer-thai-restaurants)
  match {
    food.CuisineStyle(style) {
      to-input: food.FindRestaurant(_)
    }
  }
  advise-for ("${style == 'Thai'}")
}

Selection Strategy Example - Gratuity

Consider a capsule that calculates the gratuity (tip) on a bill. In addition to the bill total, you need a tip percentage to calculate the tip amount as well as the total. In this selection strategy, Bixby learns the tip percentage so that users don't have to specify a tip percentage with every request. Bixby simply uses the percentage it has learned.

Here is the capsule enum model for tip percentages, which includes percentages that users can select if they don't specify one in the original request:

primitive (gratuity.TipPercentEnum) {
  type (Enum)
  description("Tip percent")

  symbol(10%)
  symbol(15%)
  symbol(18%)
  symbol(20%)
  symbol(25%)
}

You can then write a simple and direct selection strategy that specifies a named-advice block for each of the possible default percentage enums:

selection-strategy {
  id (learn-tip-percentage)

  match: gratuity.TipPercentEnum (this)

  named-advice ("prefer-10%") {
    advice ("${this eq '10%' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) upperBoundClosed (1.0) }
  }

  named-advice ("prefer-15%") {
    advice ("${this eq '15%' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) upperBoundClosed (1.0) }
  }

  named-advice ("prefer-18%") {
    advice ("${this eq '18%' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) upperBoundClosed (1.0) }
  }

  named-advice ("prefer-20%") {
    advice ("${this eq '20%' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) upperBoundClosed (1.0) }
  }

  named-advice ("prefer-25%") {
    advice ("${this eq '25%' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) upperBoundClosed (1.0) }
  }
}

You can implement a similar selection strategy in various ways. In the strategy below, a single named-advice uses a switch statement to quickly assign a score to each of the possible enum values:

selection-strategy {
  id (learn-tip-percentage)

  match {
    gratuity.TipPercentEnum (this)
  }

  named-advice ("prefer-percentage") {
    switch (this) {
      case (10%) {
        advice ("${10.0}")
      }
      case (15%) {
        advice ("${15.0}")
      }
      case (18%) {
        advice ("${18.0}")
      }
      case (20%) {
        advice ("${20.0}")
      }
      case (25%) {
        advice ("${25.0}")
      }
      default {
        advice ("${0.0}")
      }
    }

    advise-for { lowerBoundClosed (10.0) upperBoundClosed (25.0) }
  }
}

Selection Strategy Example - Weather

Let's go back to the weather example. If the user makes the request "Weather in Dublin" from San Jose, CA, there are four Dublin localities that Bixby's geo providers know about:

  • Dublin, IE
  • Dublin, CA
  • Dublin, GA
  • Dublin, OH

Given this ambiguous query, which Dublin should Bixby select for the user? Bixby attempts to learn the factors that influence the selection; you encode those factors in one or more strategies.

If you don't write a strategy, Bixby uses a default selection, which depends on the order of returned results. If you turn off default decisions, Bixby prompts the user in this case until you write selection strategies that help Bixby learn the best default selection.

Consider whether the user is in the same country as one of the options. The user-in-country geo strategy below can distinguish the Dublin in Ireland (option 1) from the Dublins in the United States (options 2, 3, and 4). With just this strategy, Bixby can learn to pick Dublin, IE if that is what the user selects for this request. But Bixby could not select the right Dublin in the United States for the user, since there is nothing in the strategy to distinguish between the three US-based Dublins.

selection-strategy {
  description (Prefer NamedPoints that are Localities where the user is in the same country as the locality.)
  id (user-in-country)

  match {
    geo.NamedPoint {
      from-output: viv.geo.ConstructNamedPointFromRegion {
        from-input: viv.geo.Locality (locality)
      }
    }
  }

  named-advice ("user-in-country") {
    advice ("${exists(locality.country) && within($user.currentLocation, locality.country.shape) ? 1.0 : 0.0}")

    advise-for { lowerBound(1.0) upperBoundClosed(1.0) }
    advise-against { lowerBound(0.0) upperBoundClosed(0.0) }
  }
}

To counter that limitation, you can add a strategy that indicates whether the user is in the same level-one division (such as a US state) as the option, such as the user-in-level-one strategy below. If the user is in the same division as the option they select (for instance, California) and selects Dublin, CA, that would be enough advice to help Bixby learn to pick Dublin, CA for the user. But if the user selects Dublin, GA or Dublin, OH, both are in states different from the user's, so Bixby is still unable to learn the correct selection. However, if the user selects Dublin, IE in this scenario, this additional strategy still provides useful information to help Bixby learn that user preference.

selection-strategy {
  description (Prefer NamedPoints that are Localities where the user is in the same level one as the locality.)
  id (user-in-level-one)

  match {
    geo.NamedPoint {
      from-output: viv.geo.ConstructNamedPointFromRegion {
        from-input: viv.geo.Locality (locality)
      }
    }
  }

  named-advice ("user-in-level-one") {
    advice ("${exists(locality.levelOne) && within($user.currentLocation, locality.levelOne.shape) ? 1.0 : 0.0}")

    advise-for { lowerBound(1.0) upperBoundClosed(1.0) }
    advise-against { lowerBound(0.0) upperBoundClosed(0.0) }
  }
}

Bixby needs more information to distinguish between the Dublins in GA and OH. Perhaps the population could help decide that. You can add a strategy, such as prefer-by-population below, that assigns a score to each Dublin in this example based on its population. Bixby's learning algorithms learn what ranges of population represent the localities users select in different contexts.

selection-strategy {
  description (Ranks by population.)
  id (prefer-by-population)

  match {
    geo.NamedPoint {
      from-output: viv.geo.ConstructNamedPointFromRegion {
        from-input: viv.geo.Locality (locality)
      }
    }
  }
  named-advice ("population") {
    advice ("${exists(locality.population) ? locality.population : 0}")

    advise-for { lowerBoundClosed(0.0) }
  }
}

Strategies like this can include any factor that influences a user's decision. A strategy doesn't have to be a global truth for all users and all decisions. Bixby learns which strategies to trust when making a selection based on a user's past behavior. Strategies do not have to be correct, complete, or unique. Incorrect strategies that conflict with user behavior can still help Bixby learn how the system should act for a user. Strategies can be incomplete by not providing advice for all possible results. You do not have to consider how all the strategies you write behave together. Bixby's learning algorithms automatically determine which strategies to trust, in what context, and even for which results.

Consider one more selection strategy for this example. The user-near-named-point strategy below indicates which options are within 25 miles of the user. In this example, the user is in San Jose, CA, and if they select "Dublin, CA", this strategy teaches Bixby that the locality is not only within the same level-one division, but also within 25 miles of them.

selection-strategy {
  id (user-near-named-point)

  match {
    geo.NamedPoint (this)
  }

  named-advice ("near-current") {
    advice ("${geoDistance($user.currentLocation, this.point, 'Miles')}")
    advise-for { lowerBound(0) upperBound(25.0) }
  }
}

In the UI, there is an affordance where users can see and also update selections that Bixby makes.

You can add more specialized match patterns to customize advice for specific situations. We encourage you to contribute as many selection strategies as needed to reasonably inform the expected results that Bixby suggests.

Improving the Learning Rate of Selection Strategies

After you have written enough strategies, you can expect Bixby to reasonably discriminate between all the expected options for your selection prompt. You might then be more concerned with the rate of learning: how many selections a user needs to make before Bixby learns to select an option automatically. You can help improve Bixby's learning rate in a variety of ways.

Learning Rate Example - Non-Binary Scores

Remember that the advice score itself is an opportunity to discriminate your options further. Consider an example capsule learning to pick a card out of a deck of 52 traditional playing cards. You might want to write one named-advice block as follows:

selection-strategy {
  id (learn-to-pick-a-card)
  description (Binary Feature)

  match { PlayingCard (this) }

  named-advice ("for clubs") {
    advice ("${this.suit eq 'clubs' ? 1.0 : 0.0}") // Binary Feature (FOR clubs)
    advise-for { lowerBoundClosed (1.0) }
  }
}

This is enough to discriminate the club cards from all other cards. Bixby's algorithms will learn the combination of strategies needed to learn a specific club card based on the other discriminative strategies you would likely include, such as advice for red vs. black, hearts, diamonds, spades, and the face value of a card. Assuming all these other strategies are binary as well, there isn't much, other than the face-value-score advice, to distinguish, for example, the 9 of clubs from the 7 of clubs.

You can include that distinction in the advice itself by using the face value of the card as the score to further discriminate the options, as shown below:

selection-strategy {
  id (learn-to-pick-a-card)
  description (Scored Feature)

  match { PlayingCard (this) }

  named-advice ("for clubs") {
    advice ("${this.suit eq 'clubs' ? this.value : 0.0}") // Scored (non-binary) Feature (FOR clubs by face-value score)
    advise-for { lowerBoundClosed (1.0) }
  }
}

With no additional cost to Bixby's learning algorithms, you are now helping them discriminate not just between club and non-club suits, but also between all the different club cards based on the face value of the card. Remember to use the actual advice score to help discriminate your options further.

Learning Rate Example - Composite Features

You can also improve Bixby's Selection Learning rate by explicitly including composite features that Bixby would normally have to learn on its own through more data and selections. Consider again an example capsule learning to pick one card out of a deck of 52 traditional playing cards. With a sufficient number of selections, Bixby's algorithms will learn the combination of advice features that represent the selection a user prefers. You, as the developer, can improve that rate of learning and help Bixby see combinations that might be representative of the user's selection behavior by explicitly writing those strategies. For example, you might have already written the binary advice below to distinguish between the different colors and card suits:

selection-strategy {
  id (learn-to-pick-a-card)
  description (Binary Feature)

  match { PlayingCard (this) }

  named-advice ("color") {
    advice ("${this.color eq 'red' ? this.value : 0}")
    advise-for { lowerBoundClosed (1.0) }
    advise-against { upperBoundClosed (0.0) }
  }

  named-advice ("clubs") {
    advice ("${this.suit eq 'clubs' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) }
  }

  named-advice ("spades") {
    advice ("${this.suit eq 'spades' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) }
  }

  named-advice ("diamonds") {
    advice ("${this.suit eq 'diamonds' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) }
  }

  named-advice ("hearts") {
    advice ("${this.suit eq 'hearts' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) }
  }
}

Bixby doesn't inherently know about playing cards or that certain suits can only have a specific color. It learns that through a combination of the color advice and the suit advice features. As the developer, you can encode this knowledge by defining a single advice feature that combines both:

selection-strategy {
  id (learn-to-pick-a-card)
  description (Composite Feature)

  match { PlayingCard (this) }

  named-advice ("black clubs") {
    advice ("${this.color eq 'black' and this.suit eq 'clubs' ? 1.0 : 0.0}")
    advise-for { lowerBoundClosed (1.0) }
  }
}

Selection Behavior

This section discusses the selection behavior of values and how to adjust the behavior of Selection Learning, if needed.

Value Sources (Prior to Selection Learning and Selection Rules)

Selection Learning and Selection Rules do not provide input values. When enabled, they help choose among the options when there are too many for a single action to use for execution. These input values can come from multiple sources. If required, they are retrieved in the following order until the minimum number of values required by the action is found:

  1. Direct User Input - For example, the FindRecipes action could require a single ingredient, but the user provides two or more in their request, like "Find chicken and egg recipes".

  2. Provider API - For example, another action or service provides a list of ingredients to use, but FindRecipes can only take a single ingredient.

  3. Conversation - Multiple requests in the same capsule are considered a conversation. The platform uses signals from previous conversation requests if required. For example, the user had a conversation with Bixby using the recipes capsule and has mentioned ingredients during the requests. Previously mentioned ingredients will be used if needed, using the newest first.

  4. Default Value - Enumerated values or intent-based values from the default-init key, which the platform executes and retrieves.

  5. Instantiation Strategies - If there are no options available from any of the above, instantiation strategies (if present) are evaluated and the results merged.

  6. User Prompted - If after all of the above has happened and the required input is missing, the user is prompted to either input something in a value prompt or choose from a selection prompt.

Again, if at any time during the input evaluation there are more values than an action can use, then, if enabled, Selection Learning and then Selection Rules are evaluated.
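To illustrate source 4, a default value can be supplied through default-init on an input. This sketch reuses the gratuity enum from this guide; the tipPercent input name and the exact intent syntax are assumptions and should be checked against the default-init reference documentation:

```bxb
input (tipPercent) {
  type (gratuity.TipPercentEnum)
  min (Optional) max (One)
  // Source 4: a default value the platform evaluates when the
  // request, provider, and conversation supply no tip percentage.
  default-init {
    intent {
      goal: gratuity.TipPercentEnum
      value: gratuity.TipPercentEnum (18%)
    }
  }
}
```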

Selection Behavior Adjustment

There might be times when you want to override or adjust the behavior of Bixby's Selection Learning. Through selection behavior, you can have fine-grained control of learning for your capsule. Selection behavior is specified within an action input.

Here are some options for controlling Selection Learning that come with selection behavior:

  • RankingOnly: Rank options by personal learning instead of having Bixby select an option. Use this when you want to prompt the user but also have Bixby learn the best order of options for the user (with the top-most option being the best).
  • NoRanking: Don't use Bixby's own ranking and instead use the order provided by the provider. This is useful when you have a specific order you want to use when presenting options.
  • NoSharing: Only use personal learning for a particular input. Do not use shared learning. This is useful in cases where you don't want community learning to influence selections that are meant to be personal.
  • NoPreferences: Do not consider the user's enabled Preferences as a factor in Selection Learning.
Note

With the RankingOnly selection behavior, Bixby always prompts the user and never selects automatically. AlwaysSelection (part of prompt-behavior) also results in a prompt, but it bypasses Selection Learning entirely. The effect looks similar, since Bixby never makes a selection in either case. However, because Selection Learning is not invoked with AlwaysSelection, the options are not ranked.

Consider, for example, a capsule that allows for ride sharing. If you enable Selection Learning (with-learning) without adjusting the behavior, Bixby might automatically pick a vehicle type that is not always the option the user wants. To address this, you can use the RankingOnly Selection Behavior, which ensures that Selection Learning ranks options but does not pick one. Use the option property of with-learning to set the Selection Behavior:

action (ChangeRideShare) {
  type (Search)
  description (Prompt for a new ride share. Used only to change the desired rideshare type.)

  collect {
    computed-input (rideShare) {
      type (RideShare)
      min (Required) max (One)
      default-select {
        with-learning {
          option (RankingOnly)
        }
      }
    }
  }
}

Taking this further, you can specify multiple Selection Behaviors to further refine how your capsule uses Selection Learning. While some Selection Behaviors don't go well together (you wouldn't want to use both NoRanking and RankingOnly), you can combine them to suit your needs. With the Selection Behaviors below, Bixby only provides a ranking of options based on learning with Selection Strategies: user preferences from Preference Learning aren't used as a factor, and Bixby does not select an option for the user.

action (FindMarket) {
  type (Search)
  description (Performs a business search)

  collect {
    input (location) {
      type (mySearchRegion)
      min (Optional)
      max (One)

      default-select {
        with-learning {
          option (RankingOnly)
          option (NoPreferences)
        }
      }
    }
  }
}

For a detailed description of the options for Selection Behavior, refer to option in the with-learning reference documentation.

User-enabled preferences can be a factor in how Bixby automatically makes selections for the user or, in the case of the RankingOnly selection behavior, how it ranks options for user selection. However, there should be no expectation of the exact order of results.

Note

Keep in mind that if you don’t write Selection Strategies for your selection prompt, Bixby does not learn (select automatically or rank in the case of RankingOnly) from selections.

Testing and Debugging Selection Learning

Bixby Developer Studio does not support debugging Selection Strategies at this time. Until there is tool support, treat all of Selection Learning as a "black box" without developer visibility or interactive debugging capabilities.

When testing Selection Learning in the Simulator, make sure you disable the Deterministic Mode option or Selection Learning won't be applied. You can reset Selection Learning preferences with the Reset Learned Behavior button.

You can learn more about Selection Learning using a ride share example in the Selection Learning sample capsule.