Sunday, October 6, 2019

Permission Denied Prusa Slicer Firmware Update

I am using Ubuntu.  I need the latest version of the Prusa Slicer.  There were 2 options to download a Linux version of Prusa Slicer: a tarball or an AppImage.  The website suggested downloading the AppImage.  When I double-clicked on the AppImage file I got warning messages and it did not load.  I then downloaded the tarball, untarred it, and that version worked.  It turns out you need to change the permissions of the AppImage to executable before it will run.

Next, I wanted to update the firmware of my Prusa MK2.5.  I downloaded the latest firmware, opened up Prusa Slicer and selected Configuration->Flash Printer Firmware.  I selected the correct firmware and hit flash.  I got "Permission Denied".  I know from enough experience that this is because of the serial port permissions.  It is either the owner or the group.  When you connect the USB cable to Ubuntu, it adds /dev/ttyACM0.  The owner was "root" and the group was "dialout".  I added my user to dialout.

sudo adduser my_user_name dialout

The group change does not take effect until you log out and back in, so I also changed the owner of /dev/ttyACM0 to my user name, which we will call my_user_name.

sudo chown my_user_name /dev/ttyACM0

Then everything worked and I could update the firmware on my printer.

Just some quick notes for anyone else.

Tuesday, January 2, 2018

DIY Peloton Bike for under $100

So my sister-in-law got a Peloton for Christmas.  I want one also, but it is over $2000 plus a $39 a month fee to use it.

Well, that is too much, but they do have the Peloton app for $12 a month, with a 14 day free trial, for iPad and iPhone.  You get the same classes but you do not get all the feedback from the bike.  So I thought I would give this a shot.  The motivation from the class and the cadence feedback seem to be more than enough to enjoy riding my bike indoors.

So here is everything I got to build my own setup.  My setup cost under $100 because I already had a lot of the equipment.  Most people will be in similar situations.  I gave Amazon links, because everyone shops at Amazon now.


You will need a bike, which I have for triathlons.  You will need a stationary bike trainer, which I also have.  The trainer is quiet and sturdy, so you can use it indoors.  You will also need a mount for your front tire, so you do not feel like you are falling forward.

I have a really nice bike.  Those that do not have a bike can use a stationary bike like you see in the gym.  There are cheaper ones for under $300.  This is the bike most people suggested.


The instructor will give you a cadence to ride your bike at.  This can be measured with this device, which attaches to your bike.  You will not need the speed sensor version, but I thought I might use it with my actual bike on rides, so you can get the cheaper version.


Cadence Monitor Screen

To monitor your cadence, you will need something to connect to the Wahoo cadence sensor.  It uses Bluetooth and ANT+.  The nice thing about ANT+ is that it will connect to my Garmin tri watch.  But for a bike workout, I do not want to look at my arm for cadence.  So I decided to get a bike mount for my cell phone.  I chose this one because I have aerobars on my tri bike, so I could not mount it in a standard way.  This one looked very flexible in mounting pole diameters.


Peloton App

The Peloton app only works on iOS devices.  This means iPad or iPhone.  It will not work on a MacBook.  I have an old iPad, so I installed the app to verify it would install.  I then wanted to watch the iPad on a bigger TV.  I have been told I can do this with an Apple TV.  I have never done this, so I hope it works.  I was given an old Apple TV.  Now I need to figure out how to stream my iPad to the Apple TV.  Worst case, I watch the app on my iPad.

Sadly, the Peloton app does not work on an Android phone, even though I have a lot more accessories for Android devices.  And the Peloton's giant screen runs on Android.  Weird.

The app is $12 a month, but you get a 14 day free trial.  So I will try it out and see if I like it.  If not, I only spent $80, and everything I bought I can reuse.

Now I have to wait for the parts.  Hope this helps in getting you started.

Friday, December 8, 2017

Create Ad-hoc Wifi Hotspot On Your Donkeycar (RPi3)

Here are some quick instructions on how to create an ad-hoc Wifi connection on your Donkeycar (RPi3).  This connection is mainly used for direct connections to the Donkeycar at the track, so you will not need a router or other Wifi access point.  I have found the range of the Wifi connection to be great.


So the typical way to communicate with your Donkeycar is through SSH.  To connect through SSH, both your laptop and the Donkeycar must be on the same network.  These instructions will make your Donkeycar (RPi3) a Wifi Hotspot.  You will then connect to the Donkeycar through Wifi just like you connect to any other Wifi connection.

Your Donkeycar will have a static IP, so you will always know its IP address.  When you SSH into your Donkeycar, you can always give the same IP address.  You will need to make your laptop's IP address match your Donkeycar's subnet.


  • On your Raspberry PI modify the file:
sudo nano /etc/network/interfaces

  • Modify the wlan0 entry to create an ad-hoc network.
#allow-hotplug wlan0
auto wlan0
iface wlan0 inet static
# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
  # Example static address; use any address you like
  address
  wireless-channel 1
  wireless-essid Donkeycar_Rico
  wireless-mode ad-hoc
The address is any address you would like for your Donkeycar.  The essid is any name you want; it is the name you will find in the list of available Wifi networks.

  • Close the file.
Ctrl-X, then Y to save.

  • Reboot the RPi3.
sudo reboot

 Connect to RPi3 through Wifi

When the RPi3 reboots, you will see your essid name in the list of available Wifi connections.

  • Connect to the Donkeycar Wifi network
  • You will now need to set the IP address on your laptop to a static IP address.  The RPi3 does not run DHCP to give you an IP address.
The IP address must be on the same subnet as the RPi3's address, but it cannot be the same address.

Depending on your OS, this will vary how you set your IP address. 

Here is a site for OSX

Here is a site to show you how on Windows.
I noticed that Windows 10 has issues connecting to an ad-hoc network, but OSX works fine.

And here is a site for Ubuntu


Connecting to this Wifi network will make you lose your internet connection.  You will only be connected to the RPi3.

You can then SSH or RSYNC to the RPi3.  If you need to transfer files from the internet to your RPi3, first reconnect to your Wifi connection with internet access, get the files you need, then connect back to the Donkeycar's Wifi and RSYNC the files between the laptop and RPi3.  This connection is mainly used for direct connections at the track.

Tuesday, November 21, 2017

Pandas Dataframe Rolling Function Window Type Examples

I was trying to figure out the difference between all the window types in the rolling function.  So I plotted them all.  This may help you decide which one to use also.  They are all subtly different.  I used a window size of 50.
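As a rough sketch of how these smoothed series are produced (the data here is synthetic, not the ADCP data, and the named window types require scipy to be installed):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the water velocity data
s = pd.Series(np.sin(np.linspace(0, 20, 400)) + 0.3 * np.random.randn(400))

# win_type=None (the default): every sample in the window gets equal weight
flat = s.rolling(window=50).mean()

# Named window types weight the samples using scipy's window functions
boxcar = s.rolling(window=50, win_type="boxcar").mean()          # equal weights, same as None
bartlett = s.rolling(window=50, win_type="bartlett").mean()      # triangular weights
gauss = s.rolling(window=50, win_type="gaussian").mean(std=1.0)  # std is passed through mean()
```

The first 49 values come back as NaN because a full window of 50 samples is not available yet.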

You can find the different types here:

You can find the code I used to make the plots here:

This data is the water velocity collected by a Rowe Technologies Inc. ADCP in the waters of Chile between 8/03/2017 and 9/24/2017, just under 2 months of data.  So you can see the increase and decrease in water speeds with the high and low tides.  The yellow band is near the water surface.

Window Type: None
Window Size: 1

Window Type: None
Window Size: 50

Window Type: Bartlett
Window Size: 50

Window Type: Barthann
Window Size: 50

Window Type: Blackman
Window Size: 50

Window Type: BlackmanHarris
Window Size: 50

Window Type: Bohman
Window Size: 50

Window Type: Boxcar
Window Size: 50

Window Type: Gaussian
Window Size: 50
STD: 0.1

Window Type: Gaussian
Window Size: 50
STD: 1.0

Window Type: Hamming
Window Size: 50

Window Type: Nuttall
Window Size: 50

Window Type: Parzen
Window Size: 50

Tuesday, September 26, 2017

Pass A Video into Tensorflow Object Detection API

To get video into the Tensorflow Object Detection API, you will need to convert the video to images.  Then pass these images into the Tensorflow Object Detection API, which will create new images with the objects detected.  Then convert these images back into a video.

It's a pretty simple process.  The most difficult part is just installing all the dependencies.

Example Video Produced

It is kinda funny to see all the dogs and how they are labeled as cats, birds and cows.

Produced using ssd_mobilenet_v1_coco model:

Produced using faster_rcnn_inception_resnet_v2_atrous_coco model:

Original Video

Install Dependencies

I followed the instructions from the Tensorflow Object Detection API website.  All of this was done in OSX on a MacBook.  But you can use apt-get on Linux, or in BASH on Windows 10, to install everything also.

You can find the source code here:

First, let's check out the code from GitHub:

git clone

Now let's create a virtualenv and activate it.  Make sure you have Python 3.5 installed on your computer.  If you do not, there are many ways to install a specific version of Python.

virtualenv env -p python3.5
source env/bin/activate

This will create a folder named env.  We then activate the virtualenv.  Now let's install the dependencies into the virtualenv.

brew install protobuf

# GPU version only works if your computer has the correct video card
pip install tensorflow
pip install tensorflow-gpu

pip install pillow
pip install lxml
pip install jupyter
pip install matplotlib

# Video Import and Export
brew install ffmpeg
brew install opencv
pip install opencv_python
pip install moviepy

We now need to run a compile command that is given in the Tensorflow Object Detection API instructions.  We also need to set the PYTHONPATH to include the correct folders.

cd models/research/

# From tensorflow/models/research/
protoc object_detection/protos/*.proto --python_out=.

# From tensorflow/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

Code written to run with the Tensorflow Object Detection API will be placed in models/research/object_detection.  This way all the libraries are there.

Create a folder in the top level named video_output.  Within that folder create a folder named output.

mkdir video_output
mkdir video_output/output

Folder Structure

  - research
    + object_detection
        - ....
    + ....
  - video_output
    + output

The tar.gz files are the models that you can run.  There are other models, and they all have different tradeoffs that make them better or worse.  Some detect better but run slower, and some run fast.  You can change which model to use in the code.

Here is a link to an explanation of all the models and their download links.

Convert Video to Images

This will convert the video dog_video.mp4 to images and put them all in video_output.  You can select any video you would like, and it does not have to be an .mp4 file.  Make sure there is enough padding in the image file names or your images will be loaded out of order.

# Convert the video to images and store them in video_output
import cv2

vc = cv2.VideoCapture("dog_video.mp4")
c = 1

if vc.isOpened():
    rval, frame =
    rval = False

while rval:
    # Zero-pad the frame number so the images sort in order
    cv2.imwrite('video_output/' + str(c).zfill(7) + '.jpg', frame)
    c = c + 1
    rval, frame =

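To see why the zero padding from zfill matters: plain strings sort lexicographically, so unpadded frame numbers come back out of order.

```python
# Unpadded names: "10.jpg" sorts before "2.jpg"
unpadded = [str(i) + ".jpg" for i in (1, 2, 10)]
print(sorted(unpadded))  # ['1.jpg', '10.jpg', '2.jpg']

# zfill(7) pads with leading zeros, so lexical order matches numeric order
padded = [str(i).zfill(7) + ".jpg" for i in (1, 2, 10)]
print(sorted(padded))    # ['0000001.jpg', '0000002.jpg', '0000010.jpg']
```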

Pass Video Images into Tensorflow Object Detection API

This source code was found here.

I commented out downloading the model.  But you can uncomment it if you would like to have the model downloaded if you have not done so already.

I also changed it so it is not hard-coded to look for 2 images.  It will now look in the folder video_output for all .jpg files and add them to the list to read in.

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import glob

from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image


# This is needed to display the images.
#%matplotlib inline

# This is needed since the notebook is stored in the object_detection folder.

from utils import label_map_util
from utils import visualization_utils as vis_util

# What model to download.
#MODEL_NAME = '../../../ssd_mobilenet_v1_coco_11_06_2017'                           # Fast
MODEL_NAME = '../../../faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017'      # Slow Best results

# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'

# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')

# The COCO label map contains 90 classes
NUM_CLASSES = 90


#opener = urllib.request.URLopener()
tar_file = + '.tar.gz')
for file in tar_file.getmembers():
  file_name = os.path.basename(
  if 'frozen_inference_graph.pb' in file_name:
    tar_file.extract(file, os.getcwd())

detection_graph = tf.Graph()
with detection_graph.as_default():
  od_graph_def = tf.GraphDef()
  with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
    serialized_graph =
    tf.import_graph_def(od_graph_def, name='')

label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

def load_image_into_numpy_array(image):
  (im_width, im_height) = image.size
  return np.array(image.getdata()).reshape(
      (im_height, im_width, 3)).astype(np.uint8)

# Find all the .jpg images in the video_output folder
PATH_TO_TEST_IMAGES_DIR = '../../../video_output'
file_list = glob.glob(PATH_TO_TEST_IMAGES_DIR + os.sep + '*.jpg')  # Get all the jpgs in the video_output folder
TEST_IMAGE_PATHS = sorted(file_list)  # Sort so the frames are processed in order
print("Test Images")

# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)

with detection_graph.as_default():
  with tf.Session(graph=detection_graph) as sess:
    # Define the input and output Tensors for detection_graph
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')

    # Each box represents a part of the image where a particular object was detected.
    detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')

    # Each score represents the level of confidence for each of the objects.
    # Score is shown on the result image, together with the class label.
    detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
    detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
    num_detections = detection_graph.get_tensor_by_name('num_detections:0')
    img_idx = 0

    for image_path in TEST_IMAGE_PATHS:
      # Open the image file
      image =

      # the array based representation of the image will be used later in order to prepare the
      # result image with boxes and labels on it.
      image_np = load_image_into_numpy_array(image)

      # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
      image_np_expanded = np.expand_dims(image_np, axis=0)

      # Actual detection.
      (boxes, scores, classes, num) =
          [detection_boxes, detection_scores, detection_classes, num_detections],
          feed_dict={image_tensor: image_np_expanded})

      # Visualization of the results of a detection.
      vis_util.visualize_boxes_and_labels_on_image_array(
          np.squeeze(boxes),
          np.squeeze(classes).astype(np.int32),
          np.squeeze(scores),

      print("Show Image")
      im = Image.fromarray(image_np)
      # Save the annotated frame into video_output/output"/output/" + str(img_idx).zfill(7) + ".jpg")
      img_idx += 1

Convert Images Back to Video

This will convert the images produced by the Tensorflow Object Detection API back into an MP4 video.  The new video will be placed at video_output/output/output.mp4.

Make sure your output folder does not contain a .DS_Store file or this code will not work.

from moviepy.editor import ImageSequenceClip
clip = ImageSequenceClip("video_output/output", fps=2)
clip.to_videofile("video_output/output/output.mp4", fps=2) # many options available
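A small defensive sketch (assuming the same folder layout as above) that removes a stray .DS_Store before building the clip:

```python
import os

# .DS_Store is a hidden metadata file OSX drops into folders;
# remove it if present so ImageSequenceClip only sees image files
ds_store = os.path.join("video_output", "output", ".DS_Store")
if os.path.exists(ds_store):
    os.remove(ds_store)
```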

Monday, August 21, 2017

Donkeycar 2 - Install, Setup and Run an Autonomous Vehicle

The latest Donkeycar 2 code can be found at:

This code allows you to create an autonomous car that drives itself using machine learning with Tensorflow.

The code is currently transitioning from Donkeycar to Donkeycar 2, and there are some issues if you try to use the latest code from this repository.

So I created a fork of the project with all the fixes needed to run the latest code.  You can find the latest code on my forked repository:

Here are the instructions to Install, Setup and Run the Donkeycar 2 code.

You can start with a fresh install of Linux on the desktop and RPI3.  Or you can reuse the old RPI3 image and just install the new Donkeycar code over it.


Here is the parts list:
Part Description                                                          Approximate Cost
Magnet Car or alternative                                                 $92
M2x6 screws (4)                                                           $6.38 *
M2.5x12 screws (8)                                                        $4.80 *
M2.5 nuts (8)                                                             $5.64 *
M2.5 washers (8)                                                          $1.58 *
USB Battery with microUSB cable (any battery capable of 2A 5V output)     $17
Raspberry Pi 3                                                            $38
MicroSD Card (many will work, I like this one because it boots quickly)   $18.99
Wide Angle Raspberry Pi Camera                                            $25
Female to Female Jumper Wire                                              $7 *
Servo Driver PCA 9685                                                     $12 **
3D Printed roll cage and top plate (Purchase: Donkey Store)

The full document for the build instructions is here:


These instructions are for OSX and Linux.  

If you have Windows 10, install BASH, this will give you a Linux terminal in Windows 10 and make things a lot easier.   Instructions on how to install BASH are here:

If you have Windows 7, you will need to install python3 and pip. 


You will need to copy the RPI3 image to an SD card.  The instructions to load the SD card can be found here:
The code on the image and the python projects are out of date and will need to be updated or reinstalled.


Install virtualenv if you have not done so already.
pip install virtualenv
Sometimes you have to call pip3.
pip3 install virtualenv 

Let's create a virtual environment.

virtualenv env
This will create a folder named env.  Now let's activate the virtual environment.
source env/bin/activate
You should now see (env) at the beginning of your terminal prompt, which means you are working in the virtual environment.

Source Code

Clone the latest version of the Donkeycar code.  If you want to bypass all the manual steps after this, you can clone my fork, which already has these changes made.

git clone
Or clone my code and skip fixing the code:
git clone

This will create a folder named donkey wherever you run the command.  Go into this folder.
cd donkey 
Now let pip install all the dependencies.  The "-e" flag tells pip to install the given project folder.
pip install -e .
This will download all the dependencies and install Donkeycar 2.  There are some additional projects that need to be installed.
pip install h5py docopt
If you try to run this code, you will get an error message about "Optimizer with shape (15,)".  This is because the keras versions do not match between the RPI and the desktop computer.  So let's make sure both the desktop and the RPI are running the same version of keras by installing the specific version 2.0.5.
pip uninstall keras
pip install keras==2.0.5
On the desktop computer, you can simply install the latest version of Tensorflow; currently for me it is 1.2.1.  RUN THESE COMMANDS ONLY ON THE DESKTOP COMPUTER.
pip install tensorflow --upgrade

On the RPI3, you will need to download the RPI3 version of the latest Tensorflow, version 1.1.0.  RUN THESE COMMANDS ONLY ON THE RPI3.

pip uninstall tensorflow
pip install tensorflow-1.1.0-cp34-cp34m-linux_armv7l.whl

Now, let's rebuild Donkeycar 2 with the latest libraries.
pip uninstall donkeycar
pip install -e .

Modify Source Code

There were some issues when I tried to run the original source code.  My fork has them fixed, and I am trying to get a pull request merged into wroscoe's code.  Here are the instructions to fix the code yourself.  Make sure you make these changes on both the desktop and the RPI3.

This file gives an error that 'val_loss' is not an option to use.  You will need to change 'val_loss' to 'loss'.

Open the file: donkey/donkeycar/parts/ml/
Search for 'val_loss'.
There are 2 places where it is located, at lines 40 and 48.  Change 'val_loss' to 'loss'.

This file gives an error about writing an unknown JSON value 0.0.  I added a try/except to skip values it does not like.  I also output a message, because I am not sure if this needs to be fixed.  Currently the throttle does not work when you let it run autonomously, so the issue may be located here.

Open the file: donkey/donkeycar/parts/stores/
Go to line 101, 'write_json_record()', and add a try/except around 'json.dump()'.
    def write_json_record(self, json_data):
        path = self.get_json_record_path(self.current_ix)
        with open(path, 'w') as fp:
                json.dump(json_data, fp)
            except TypeError as te:
                print('Type Error in tub::write_json_record: ' + str(json_data), te)
            except Exception as e:
                print('Exception in tub::write_json_record', e)

Ok, so now the source code can run.  So let's rebuild the application.
pip uninstall donkeycar
pip install -e .

Run Code

So now let's get things running.  There are 3 steps.  First, drive the car to record images and JSON files containing the image, throttle and steering angle.  Then use the recorded data to train a model.  Then drive with this model.

Record Data

Connect the RPI3 to the WIFI connection.  Get the IP address of the RPI3.

SSH onto the RPI3 and run the drive command from the Donkeycar application folder.  Where this is will depend on where Donkeycar was installed; it is the folder that also contains the folders: data, logs, and models.

ssh pi@192.168.X.XXX
The default password is 'raspberry'.

cd donkey
python drive

This python command is run from the virtual environment, which was already created in the image you used from Donkeycar 1.

You will need to know the IP address of the car to view the web interface.

This will display the live video feed.  Select a max throttle; I chose around 20%.

PS3 Controller

If you have a PS3 game controller, use it; it will make driving a lot easier.  If you do not, you can use the blue box as a joystick.  If you do have a PS3 game controller, select the "Gamepad" toggle box.  You will need to connect the PS3 controller to the desktop or laptop through Bluetooth.  Connect the USB cable between the PS3 controller and the desktop, then turn on Bluetooth on the desktop computer.  Then press the PlayStation button on the PS3 controller, unplug the USB cable, and see if the Bluetooth connection is made.  Press the left joystick on the PS3 controller and see if the car moves forward.  If it does, then begin driving.

Drive the car 20 to 30 laps around the track.  Every time you give it throttle, it will automatically record.

Train Model

Lets get the data off the RPI3 and load it onto the desktop computer.  We can then run Tensorflow to train based off the data recorded.

We will use RSYNC to copy the files back and forth between the desktop computer and RPI3.  It will only copy over changes after the initial copy.

Let's make a directory to store the data from the RPI3.  I created the folder in the same folder as the virtual environment's env folder.

mkdir rpi

To copy data off the RPI3 to the desktop: the donkey folder should be the folder that contains data, logs, and models.  If 'donkey' is not the folder, then set the correct folder as the first path.

rsync -ah --progress pi@192.168.X.XX:donkey  rpi

This will  copy all the data from the RPI3 to the folder rpi.

Go into the folder and begin the training.

cd rpi

python train --model=myModel --tub=rpi/data/tub_3_XXXX

You may have to give the full path to the tub_XXX folder.

This will start Tensorflow and begin training on the recorded data.  Eventually it will stop on its own; this is based off the settings in the Donkeycar project.  Look for "early_stop = keras.callbacks.EarlyStopping" around line 46.

When it completes, you will have a new file named myModel in the folder 'models'.

Let's copy this file back to the RPI:
rsync -ah --progress rpi pi@192.168.X.XXX:donkey

This should only copy over one file, the new myModel file.
Now it's time to test this model.

Run the Model

Go back to the RPI3 and let's drive again.  This time the car will steer on its own, but you will have to control the throttle.  Currently the code does not work for throttle.

python drive --model=models/myModel

Open the web browser on the desktop.  Connect your PS3 controller.  Set your max throttle to the same value you used when recording the data.

Select "Local Angle (d)" under "Mode & Pilot".

Throttle up on the PS3 controller and verify the car will now steer on its own.  You are now one step closer to an autonomous vehicle.

Now I need to figure out how to make the throttle work also.  Also, there is code to control the car using the PS3 controller directly on the car instead of going through the WiFi.  This will fix the latency between the PS3 controller and the car.

Saturday, July 22, 2017

Create WPF Caliburn.Micro Application Tutorial

Create Caliburn.micro Application

Create a new WPF application in Visual Studio

Go to NuGet Manager and add the package Caliburn.Micro.Start package.
This will add the necessary files to create a Caliburn.Micro application.

Follow the instructions to modify the files to make the Caliburn.Micro work.

Modify App.xaml.cs

namespace YourNamespace
    using System.Windows;

    public partial class App : Application
        public App()

Delete MainWindow.xaml

Modify App.xaml

<Application x:Class="YourNamespace.App"
                <local:AppBootstrapper x:Key="bootstrapper" />

From this point, you should be able to build and display the basic application running.

Add View and ViewModel

Now lets add a view model and display this new ViewModel.  To do this we want the ShellViewModel to display any ViewModel we select.  So we need to change the ShellViewModel to allow any activated ViewModel to be displayed.

Lets create a basic View and ViewModel.  The View and ViewModel must match the naming scheme of XXXView and XXXViewModel, where XXX is a unique name for the view.  All the display logic will go into the View.  All the business logic will go into the ViewModel.

So create a UserControl .xaml file called Test1View.xaml.  Create a Class .cs file called Test1ViewModel.cs.

Let's modify Test1ViewModel so it knows it is a Caliburn.Micro MVVM page.
Open Test1ViewModel.cs and modify the class definition line to this:
public class Test1ViewModel : Caliburn.Micro.Screen

There are other classes the ViewModel can extend, but Screen will let us change between pages with a button click.

Let's add this new ViewModel to the AppBootstrapper.cs file.  In the Configure() method, add a new line.  This line will be similar to the others already in the method.
container.Singleton<Test1ViewModel, Test1ViewModel>();

We will be making this a singleton page.  This means every time we view this page, it will be the same instance after it is first initialized, so its state is preserved between views.  There will only be one instance of this view model created for the entire application.

If you choose container.PerRequest, the page will be recreated every time it is viewed.  So no state is saved when you change views, and you can have multiple instances of the page in one application.

To give us something new to view, we will add a button to Test1View.  Open it in the designer and add a button to the page.  You can give the button some text like "Connect".  Here is an example:

  <Button Content="Connect" />

Modify ShellViewModel to Change Page Views

We will modify the ShellViewModel to allow page changes.  This is done by activating a page.  The page is found based on what we added to the AppBootstrapper list in Configure().

Lets modify ShellViewModel.cs to this:

public class ShellViewModel : Conductor<object>, IShell, IDeactivate
    public ShellViewModel()
        base.DisplayName = "My Test Application";
        // Set the view to our new ViewModel
        var vm = IoC.Get<Test1ViewModel>();

Lets modify ShellView.xaml to this:

        <ContentControl x:Name="ActiveItem" />

Build and Test

Now build the application and run it.  You should see the button displayed.  To take this further, create additional Views and ViewModels, and have a button or menu that activates the page to display the different views.