Friday 28 January 2022

Rhapsody training feedback 2021

Past performance is a good indicator of future performance, so I thought I'd post an ordered summary of training feedback (best to worst) from last year (all online form responses):

Training feedback questions, ordered best to worst:

1. The trainer was knowledgeable about the training topics.
2. The trainer was well prepared.
3. The materials distributed were helpful.
4. Participation and interaction were encouraged.
5. The training objectives were met.
6. The objectives of the training were clearly defined.
7. The content was organised and easy to follow.
8. The meeting room and facilities were adequate and comfortable.
9. This training experience will be useful in my work.
10. The time allocated for the training was sufficient.
11. The topics covered were relevant to me.


This is an aggregate of all digital responses last year, across multiple courses with no filtering, so it gives a summary of what to expect. Topics were scored: 
5 - Strongly agree
4 - Agree
3 - Neutral
2 - Disagree
1 - Strongly disagree

Key things are:

1. Me: I deliver interactive training and I have deep product knowledge and real usage experience.
2. Materials: People seem to value the materials and labs. Labs are very comprehensive, and detailed with screenshots that cover many tool intricacies.

Some more recent responses to "What did you like most about the training?" were:
  • The games and doing the exercises together.
  • I liked the sequence of the lab training. I think it helped build our skills to become more independent when creating our own systems. 
  • The fact that the instructor walked us through the material, showed how the tool was used, and then we had the chance to do it on our end. This helps confirm if we understood or not as well as improve our familiarity with the tool.
  • Presenter was very good at communicating and keeping students engaged.
  • It was presented in an easy to follow format and the trainer answered any questions I had along the way.
  • Hands-on activities were helpful in understanding the course
  • Lots of different training activities (theory, labs, quizzes, workshop)
  • Open dialog helped to drive understanding and application of training to current work.
  • I liked working through subjects in the tools.  I also liked the Kahoot tests.
  • Well organized with live explanation and time to play in Labs.

Thursday 20 January 2022

IBM Engineering Rhapsody Tip #103 - Executable MBSE Profile's Use Case package structure

This live recorded video focuses on showing some of the features of my Executable MBSE profile related to requirements analysis. It really has two uses: firstly, it explains what the profile aims to achieve, and how it achieves a model-based systems engineering method using New Term packages. Secondly, it gives an overview of some of the less obvious helper menus, for example, the ability to rename action names in the browser and automation to move requirements into a requirements package.

Here's the transcript, for those that need it:

Hello, in this video I thought I'd cover some of the features associated with this Executable MBSE profile, so I'll just look at the requirement analysis side of it and I'll start by creating a project.

Doesn't really matter too much about this example. The first thing to know about the profile is that it's a helper to set up and create models. I don't have a single model structure; rather, I try and build kind of reusable components developed by different people in different models that can be shared. One aspect of that is this idea that particular packages, which have particular types, can be consumed by different projects, or they could be stacked in the same project.

So, when I say create a project, I always create it with one of these structures. Now here's an example.
A use case package structure creates a package for doing use case modeling, but it doesn't just create that package; it also creates a shared actor package and optionally a requirements package.

And it does it based on a unique name. In this case, I'll just take the default. The idea of that unique name is that you might have different people working on different aspects of the system, which you could call features or functions, but essentially they are collections of use cases, or more importantly, they have ownership within the model.

Where somebody could be working in this package independently with someone working in a different package, and this is what I mean by stacking.

So, if I create another use case package here, I'm going to stack it. This is Feature B and I’ll create a separate requirements package for that because it's owned by a different user.

I could flow the requirements here into the same requirements package as the one developed by the other feature, but I'm grouping requirements here in terms of features. I am sharing the actor package, though, so here I have the Feature B package.

It's kind of like user A and user B packages, but this is a common model. I could develop these in different models and then bring them together later, and an aspect of that is the unique naming of the package, because that unique naming also relates to the file on the file system.

So this is an .sbsx file, what we call a unit in Rhapsody. So here I am, I'm user A working on Feature A, and this is going to have a use case "Trap a mouse" involving the homeowner.
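The unique-name-to-unit-file mapping described above can be sketched like this. It's a hypothetical illustration of the idea only, not the profile's actual code; the sanitization rule and suffix are assumptions:

```python
import re

def unit_filename(package_name: str, suffix: str = "sbsx") -> str:
    """Derive a file-system-safe unit file name from a unique package name.
    Illustrative only; the real profile may use different rules."""
    # Keep letters, digits and underscores; replace anything else
    safe = re.sub(r"[^A-Za-z0-9_]", "_", package_name)
    return safe + "." + suffix

print(unit_filename("UseCase_FeatureAPkg"))  # UseCase_FeatureAPkg.sbsx
```

Because each package's unit file name is derived from its unique name, packages owned by different users never collide on the file system, which is what makes the stacking described above workable.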

Some of the settings here, such as which actors are created in the actor package to begin with, are driven from properties on the model, and there's a subject for this Executable MBSE profile where I've grouped together the properties. These are the default actors.

You can change the property file in the profile if you're in a particular domain where there are certain actors, like automotive, or you could just create the actors and rename them.

I always have an Environment actor, and I'm not going to remove that. That's quite useful. I can explain a little in a different video, perhaps, why that's the case.

One of the features of this profile is to flow requirements from a use case package into a requirements package, and that's just to automate one of the very common manual steps when performing requirement analysis with the use cases.

This, perhaps, is a requirement about the goals of what the mousetrap is going to do. So, the goal might be “The mousetrap shall remove mice from the home”.

This refinement tool is actually added by the profile, so there are customizations here to make the process of creating models a lot smoother, just by putting certain tools into the palettes in the right places according to the profile and the process.

Fundamentally, what relationship are we going to use for use cases? Well, I chose a refinement, so I put it in the toolbar rather than make people draw dependencies and apply a new term (which is going to have its own problems). So now let me show you another bit of automation.

If I double-click, this question appears. This is the profile’s plugin that's running because of this Executable MBSE profile, and this is what we mean by accelerating a process with a toolkit that uses product automation to automate steps.

This has created an activity diagram with a template for performing use case analysis, with some properties set on the diagram to make it easy to write free-flowing text. This is one of my approaches to use case modeling: rather than model functions here, I'm going to model steps of the use case that I might have written in a Word document.

And I just feel that that's a very easy and accessible method which can then be used to perform more detailed analysis later. It's also very easy to get non-technical or non-model-based system engineering experts to get value from modeling very quickly.

So, the preconditions are that "the trap is set". Then the "Mouse enters the trap" and the "Trap springs capturing the mouse". Obviously, this could have a bit more complicated flow, but I've got pre and post conditions associated with the use case, and I can build one scenario for this use case and go on to expand it with decision nodes.

I have also simplified the Activity Diagram toolbar here. My automations are done with new term stereotypes, and this is a textual activity diagram; the profile has been set up to create activity diagrams like this.

And that just removes some of the things like action pins and activity parameters and swim-lanes because I'm not going to use them for this part of the model so don't give people the option because we want consistency in our modeling.

We don't want, when we are doing large scale modeling, to have deviations from the method, unless those deviations are considered important in the process. Which means we'd modify the helper or the profile accordingly, rather than allow people to do everything.

This method has some other automation here that's useful. Notice, for example, that the actions on activity diagrams have different action text from their name in the browser. This is one of the things in Rhapsody: because it was built for software code generation initially, it has this idea of keeping the names in the browser free of things like spaces, so they are essentially codable.

For these actions to appear in DOORS Next, for example, with the same name as the text, there's some automation here to effectively auto-rename actions. This automation is similar to that provided by the SE toolkit, which supports the Harmony/SE method.
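The auto-rename idea can be illustrated with a small sketch: take the action's display text and derive a space-free, codable browser name. This is an invented approximation of the behavior, not the profile's or the SE toolkit's actual code:

```python
import re

def browser_name(action_text: str) -> str:
    """Derive a codable, space-free browser name from an action's text.
    Hypothetical sketch; the real automation's rules may differ."""
    words = re.findall(r"[A-Za-z0-9]+", action_text)
    # CamelCase the words so the name contains no spaces or punctuation
    return "".join(word.capitalize() for word in words)

print(browser_name("Trap springs capturing the mouse"))
# TrapSpringsCapturingTheMouse
```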

This particular helper, this Executable MBSE helper, is an alternative to the Harmony/SE toolkit, and it does some of the similar things.

I've got some naming conventions here, for example. This activity diagram is a child of the use case, so it has the same name. So, if I rename the use case, I have some automation to auto-update the activity diagram name. This just helps keep the model consistent.

So, it's very subtle. One of the less subtle things is this idea of capturing requirements and moving them automatically into this requirements package, and I particularly find it valuable to collect the requirements together in the same location. There's also a little helper here to create requirements from the text of the actions.

It's just putting a requirement on the diagram, taking the text of these actions, and it also moves that requirement: providing there's a dependency from the use case package to a requirements-type package, the automation moves the requirement into that requirements package. I don't need to write the text from scratch; I can just manipulate the text that's there.
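A minimal sketch of this "flow requirement" automation, using plain dictionaries to stand in for model elements (all names here are invented for illustration): a requirement is created from an action's text, and if the use case package has a dependency on a requirements package, the requirement is relocated there.

```python
def flow_requirement(action_text, use_case_pkg, packages, dependencies):
    """Create a requirement from an action's text, then move it into the
    requirements package that use_case_pkg depends on, if one exists."""
    requirement = {"name": action_text}
    target = use_case_pkg  # default: requirement stays where it was created
    for src, dst in dependencies:
        if src == use_case_pkg and packages[dst]["type"] == "RequirementPkg":
            target = dst
            break
    packages[target]["elements"].append(requirement)
    return target

packages = {
    "UseCase_FeatureAPkg": {"type": "UseCasePkg", "elements": []},
    "Requirements_FeatureAPkg": {"type": "RequirementPkg", "elements": []},
}
dependencies = [("UseCase_FeatureAPkg", "Requirements_FeatureAPkg")]
where = flow_requirement("The mousetrap shall remove mice from the home",
                         "UseCase_FeatureAPkg", packages, dependencies)
print(where)  # Requirements_FeatureAPkg
```

Without the dependency, the sketch leaves the requirement where it was created, mirroring the behavior described above where the move only happens when the use case package points at a requirements package.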

So those are some of the automations associated with the requirements analysis section. If I go one step further: obviously I might have multiple use case packages. I could have one use case package for the whole system if I was just one developer, but really I'm trying to get different people working on different parts of the model, to avoid conflicts, with people editing the same diagrams and using the diff merge. Although Rhapsody can do that, sometimes it just makes sense to organize your users so that they are isolated, and use case analysis is a very good way of doing that.

I therefore have these types of packages, new term packages, in the model. Rather than have "base" UML packages, I think the new term packages give a bit more ability to get straight to the point of action, the place at the coalface where the users are working.

So rather than have a big office where everybody is doing everything, I'm organizing my accountants to be in the same room, and my workshop to be somewhere slightly different, because people working in the workshop are going to need different tools than someone working in the accountancy office. That's how I organize these models.

And that goes as far as also changing the right-click menus. I've got a very simplified right-click menu for the use case packages because I'm not going to do block definition diagrams here. My requirements packages are for capturing requirements, so I can have tables and matrices but not use case diagrams.

The profile provides focus for parts of the model to have particular roles, and that becomes important when we look at the process, which is the flow of information between different developers and modelers across a large organization or a large project.

Monday 17 January 2022

How to install Rhapsody Architect, Designer and Developer on the same machine

IBM put a lot of support questions and answers on the web. Here's one about shortcuts for launching the most appropriate Edition of Rhapsody, once you have an installation:

Thursday 6 January 2022

IBM Engineering Rhapsody Tip #102 - Executable MBSE Profile's Functional Analysis package structure

This live recorded video (with sound) gives an overview of the SysML package structure that my Executable MBSE Profile automatically creates for doing the functional analysis part of a method based on Executable MBSE. Key aspects are that simulations are built with actor test benches, and it's the actors that are used to drive the simulation, so that all the test stimulus is visible on an auto-generated sequence diagram. Related to this is to ensure that the structure separates elements related to the system under design/test from model elements used to test it, and having a package that contains the interfaces and events separate from the blocks that use them.

Here's the transcript in case that helps:

In this video I thought I'd show you where I'm at with the Executable MBSE profile. The profile itself is on GitHub. 

You put that profile into your installation and then that gives you this ExecutableMBSEProfile available in the list. So, let's just show you this. I'll just use a simple mouse trap system. 

OK, let's take a quick look at the structure created for this functional analysis scenarios package. The intent here is to have a package that can work on the functional analysis of a use case that's captured with textual actions and textual requirements. 

So, the currency here is going to be the requirements, and the way that they are created is through an activity diagram. What's going to happen is I'm going to consume that activity diagram in my functional analysis package. The intent is that this package is owned by a different user than the user who owns the activity diagram, and it may be added by reference. So the first thing you'll notice in this scenarios package is a 'Working Copy' package, and there's a little helper here which, based on dependencies that were created, is going to pull in the activity diagram as a copy. 

On that copy it makes a marker here in red, just so we know that this is a copy, which means I can mess around with it. I can delete stuff, and importantly, there's a little helper here which will color the actions or the accept event actions as I process them, for example, converting this action into an event, "Mouse enters trap", coming from the mouse. 

This helper will effectively color this working copy, so the original is kind of pristine. Interestingly enough, by the time I finish processing this, I'll just throw away the diagram and I'll have traceability to these requirements; that's a fundamental part of the process here, capturing that traceability. 

Let's do this operation as well, the “trap springs capturing the mouse”. 

What you'll notice here, is that when I create events they get created on a block, so I've got this Blocks_<Name>Pkg mouse trap package. 

There's a unique name given to all these packages, so there's a root package; this is the Mouse Trap package. This may represent a system, or it may represent a feature, or it could represent a use case. I leave that quite flexible. 

But it's got a unique name and that unique name is used to create a file on the file system which is going to be unique. 

The Blocks_<Name>Pkg package is where the blocks go that I'm essentially adding operations to. They represent components of the system, so it's the system of interest, the "things I am testing", and in this case there's a single block, so it's a black box model. 

What you'll also see is this test package. 

The Test_<Name>Pkg package here has got actors, and these have been stereotyped, just to make them clear (the helper will also use this stereotype). Essentially the idea is that there's a system, called the mouse trap assembly system, that is assembling my block, the mouse trap, with these actors, and they are connected through ports and interfaces. 

This (IBD) diagram is a bit messy, because it's auto-drawn. 

This is my system block and it's connected to these actors. This is the fundamental thing about the simulation structure: we build an assembly that includes the system, and the other parts of that assembly are actors; I've also got time as an actor. 

The helper also created a sequence diagram with lifelines representing those actors and the blocks that represent the subject. The system under test, in essence, because that's what we're doing: We're trying to build test scenarios. 

So that means having created this structure, I can build this system assembly. 

Hence, you'll see that there's a component here called the Mouse Trap EXE, and it's been set up with a configuration to enable it to build an instance of that assembly with animation and, in this case, it's got webify enabled. 

And when that runs, that executable runs, it's going to talk back to the Rhapsody client, and I've got full simulation capabilities here. 

Within my test harness, i.e. the design, I've got this time generator actor called elapsed time and that enables me to control time without using timers. 

In my simulations I don't use the Rhapsody standard system timer, rather I simulate time and that enables you to freeze a simulation, or you can move it at different speeds. 

This time actor can be configured to drive a simulation, either in a continuous way or a discrete event driven way. 
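The simulated-time idea can be sketched as a toy time-generator class. This is an invented illustration of the concept, not the actual actor's implementation: time only advances when told to, so the simulation can be frozen, stepped continuously, or jumped discretely from event to event.

```python
class ElapsedTimeActor:
    """Toy simulated-time generator: time advances only on request."""

    def __init__(self):
        self.now_ms = 0
        self.subscribers = []  # callbacks notified as time advances

    def tick(self, step_ms):
        """Continuous style: advance time by a fixed step."""
        self.now_ms += step_ms
        for notify in self.subscribers:
            notify(self.now_ms)

    def advance_to_next_event(self, event_times_ms):
        """Discrete-event style: jump straight to the next scheduled time."""
        pending = [t for t in event_times_ms if t > self.now_ms]
        if pending:
            self.now_ms = min(pending)
            for notify in self.subscribers:
                notify(self.now_ms)

timer = ElapsedTimeActor()
log = []
timer.subscribers.append(log.append)
timer.tick(100)                          # continuous step to t=100
timer.advance_to_next_event([50, 250])   # discrete jump to t=250 (50 is past)
print(log)  # [100, 250]
```

Because nothing in the model reads a wall clock, pausing is just not calling the generator, and running faster or slower is just changing how quickly it is driven.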

The structure is fairly important, and OK, it looks a bit complex, but essentially everything that will stimulate the system is shown on the sequence diagram, including the stimulus, so the sequence diagram is a full definition of the test cases. 

I'm not going to change anything in a simulation directly on the system blocks; rather, the stimulus always comes from an actor, and that's important. It was important in Harmony/SE classic, and it's important in this structure as well. 

And I can hook it into technologies such as Test Conductor to automatically run tests, building up a test suite that can repeat the injection of the stimulus as defined on a sequence diagram. 

Let’s look at the other elements of the structure. 

The working package is basically where I can copy things and hack around with them. 

The blocks package is where the blocks that represent the system of interest go, or if it's a white box, where the components would go. This is where the functions (represented by operations), the triggers (represented by events), and the value properties are captured. 

I've got a separate Interfaces_<Name>Pkg package. Again, that separation of the usage from the interface is important for multi-model working, where you've got multiple models that reference packages in other models; you can then import packages into other models, either by reference or by copy. 

So that's going to become important. Also important here is that these interfaces are captured as explicit interfaces on the ports. Again, that's to do with Test Conductor. 

So that's what the interfaces package is storing if you like. 

I've got my test package, I've got my interfaces package, I've got my blocks package, and I've got this block called the system assembly. The block called <Name>_SystemAssembly is higher in the hierarchy than the design package. That is partly a usability thing. 

In my test package, for example, I've got a panel diagram here, and if you start to use panels, then when you go to bind things I want to find the system assembly first, so I want it to be the first thing it finds. 

I don't bind directly to the blocks. I bind to an instance of the block running in the system assembly, so by putting that system assembly block higher, it helps with the usability here to stop you binding to the wrong elements. 

So, that's more or less the structure. 

The sequence diagrams, again, I put at a higher level, at the scenarios package level, because they're the outcome, the handoff that comes from executing and creating this functional model. 

The intent is to take the events and operations that I had and put them into a state machine. 

So I've got "mouse enters trap" and "trap springs capturing the mouse". 

Essentially, I'm integrating the behavior of the use case steps into a statechart, which captures the same behavior at the same level but is fully constructed. This enables me to execute this behavior in a way that is integrated with the other scenarios that may be captured in other use cases, or different scenarios in the same use case, until I've got a kind of fully constructed definition of system behavior. 

And importantly, in this behavior, I've got things like operations and events. 

Oh, I've got that error here. I need to add a default transition to say the system starts with no mouse. 
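The statechart behavior just described, a default transition into a "no mouse" state followed by the two use case events, can be sketched as ordinary code. State and event names here are invented for illustration; this is not generated Rhapsody code:

```python
class MouseTrap:
    """Toy executable sketch of the mousetrap statechart."""

    def __init__(self):
        # Default transition: the system starts with no mouse
        self.state = "NoMouse"

    def ev_mouse_enters_trap(self):
        if self.state == "NoMouse":
            self.state = "MouseInTrap"

    def ev_trap_springs(self):
        if self.state == "MouseInTrap":
            self.state = "MouseCaptured"

trap = MouseTrap()
trap.ev_mouse_enters_trap()
trap.ev_trap_springs()
print(trap.state)  # MouseCaptured
```

A test scenario on a sequence diagram corresponds to a sequence of event injections like the two calls above, which is why the statechart makes the use case behavior executable end to end.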

Let's build and run. 

In this definition I've got these elements, events and operations, which capture the functions of the system from its concept of operations as defined by the formal requirements handoff. 

Let me just stop and rerun that to see what happened. 

Right, there we are. The handoff is going to be test scenarios that use these operations and events and also trace to the requirements. 

So this could be test case 1, trap a mouse. This sequence diagram is in the scenario package, so I may have 20 or 30 of these, in a handoff from black box or white box. 

The functions and operations are defined on the blocks and these are all tracing to the requirements, and they’re the same requirements that were handed to me in the use case package. 

This functional model is a separate definition of the same behavior. I've done a model transformation from an activity model into a sequence diagram and statechart model, and this executes. 

So, that's the structure. Hopefully I'll do a few more videos in future when I get some time.