Configure Scheduler

Estimated length of training: 1 hour

Target audience: You have a basic understanding of how Flow and the Flow Portal work and how they are linked together, and want to know more about the scheduling parts.

Prerequisites

Basic understanding of Portal 2 and how the Portal and Flow interact. Basic understanding of MS SQL Server (e.g. having completed the "Flow on MS SQL Server" training) and access to a SQL server (other data sources can be used, but the scripts and code snippets in this training assume MS SQL).

Tables and views that we created in the "Flow on MS SQL Server" training. If you do not have them, you can add the necessary tables and views by executing the code sections below for the three tables and two views that we will be working with. Run them in the order given, since the TaskAssignment foreign keys require the Task and Resource tables to exist first (the vVisualPlanning view also joins the Project table from that training).

CREATE TABLE [dbo].[Task](
	[TaskId] [int] IDENTITY(1,1) NOT NULL,
	[TaskName] [nvarchar](100) NULL,
	[ParentTaskId] [int] NULL,
	[StartDt] [datetime2](0) NULL,
	[EndDt] [datetime2](0) NULL,
	[Projectid] [int] NULL,
	[Hid] [hierarchyid] NULL,
	[RowCreatedDt] [datetime2](0) NULL,
	[RowCreatedBy] [nvarchar](150) NULL,
	[RowUpdateDt] [datetime2](0) NULL,
	[RowUpdatedBy] [nvarchar](150) NULL,
	[testcol] [int] NULL,
 CONSTRAINT [PK_Task] PRIMARY KEY CLUSTERED
(
	[TaskId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[Task] ADD CONSTRAINT [DF_Task_RowCreatedDt] DEFAULT (getdate()) FOR [RowCreatedDt]
GO
CREATE TABLE [dbo].[Resource](
	[ResourceId] [int] IDENTITY(1,1) NOT NULL,
	[ResourceName] [nvarchar](100) NULL,
	[RowCreatedDt] [datetime2](0) NULL,
	[RowCreatedBy] [nvarchar](150) NULL,
	[RowUpdateDt] [datetime2](0) NULL,
	[RowUpdatedBy] [nvarchar](150) NULL,
 CONSTRAINT [PK_Resource] PRIMARY KEY CLUSTERED
(
	[ResourceId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[Resource] ADD CONSTRAINT [DF_Resource_RowCreatedDt] DEFAULT (getdate()) FOR [RowCreatedDt]
GO

CREATE TABLE [dbo].[TaskAssignment](
	[TaskAssignmentId] [int] IDENTITY(1,1) NOT NULL,
	[TaskId] [int] NULL,
	[ResourceId] [int] NULL,
	[RowCreatedDt] [datetime2](0) NULL,
	[RowCreatedBy] [nvarchar](150) NULL,
	[RowUpdateDt] [datetime2](0) NULL,
	[RowUpdatedBy] [nvarchar](150) NULL,
 CONSTRAINT [PK_TaskAssignment] PRIMARY KEY CLUSTERED
(
	[TaskAssignmentId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[TaskAssignment] ADD CONSTRAINT [DF_TaskAssignment_RowCreatedDt] DEFAULT (getdate()) FOR [RowCreatedDt]
GO

ALTER TABLE [dbo].[TaskAssignment] WITH CHECK ADD CONSTRAINT [FK_TaskAssignment_Resource] FOREIGN KEY([ResourceId])
REFERENCES [dbo].[Resource] ([ResourceId])
GO

ALTER TABLE [dbo].[TaskAssignment] CHECK CONSTRAINT [FK_TaskAssignment_Resource]
GO

ALTER TABLE [dbo].[TaskAssignment] WITH CHECK ADD CONSTRAINT [FK_TaskAssignment_Task] FOREIGN KEY([TaskId])
REFERENCES [dbo].[Task] ([TaskId])
ON DELETE CASCADE
GO

ALTER TABLE [dbo].[TaskAssignment] CHECK CONSTRAINT [FK_TaskAssignment_Task]
GO
CREATE VIEW [dbo].[vTask]
AS
SELECT t.[TaskId], t.[TaskName], t.[ParentTaskId], t.[StartDt], t.[EndDt], t.[Projectid]
FROM Task t
GO
CREATE VIEW [dbo].[vVisualPlanning]
AS
SELECT t.[TaskId], t.[TaskName], t.[ParentTaskId], t.[StartDt], t.[EndDt], t.[Projectid]
FROM Task t
LEFT JOIN Project p ON p.ProjectId = t.Projectid
JOIN TaskAssignment ta ON ta.TaskId = t.TaskId
JOIN Resource r ON ta.ResourceId = r.ResourceId
GO

Creating a scheduler portlet

In Portal 2, create a scheduler portlet and configure the Data section of the portlet like this:

(Note how Resources selects against the table, while the other two select against our views.)

Why do we create a view and select from that, instead of making separate queries directly against each table (Task, TaskAssignment and Resource)? Well, one reason is that this is a training and we now know how to create and use a view. But there is a performance reason too: the portal engine first selects all possible rows from the Tasks data set and then maps the rows from the result to each row in TaskAssignment and Resource. So it is a good idea to limit the result of the select statements as much as possible, so that we only get the rows from each table that really matter. By selecting from a view we can add a where clause that only shows rows for the actual project we want to see. Once you get up to a few thousand rows, this becomes necessary.
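
To make this concrete, here is a sketch of what such a filter could look like against our vVisualPlanning view. The where clause can live in the portlet's select statement, or be baked into the view itself if the project id is fixed; the project id 1 below is just a made-up example.

SELECT TaskId, TaskName, ParentTaskId, StartDt, EndDt, Projectid
FROM vVisualPlanning
WHERE Projectid = 1 -- made-up project id, replace with the project you want to show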

Portal 2 introduces another performance boost: since it knows the start and end dates in the tasks window, it applies them in a where clause when running the query, so you only get the rows from the task table that fall within the date span of the current view. (So yes, a year timeline will still fetch tasks for an entire year, but a week view will only fetch the tasks for the week you are looking at, and execute a new query when you change week.)
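
Conceptually, the predicate the engine adds behaves like the overlap check below. This is only an illustration; the actual SQL that Portal 2 generates may look different, and @ViewStart/@ViewEnd simply stand in for the date span of the current view.

SELECT TaskId, TaskName, StartDt, EndDt
FROM vTask
WHERE StartDt < @ViewEnd  -- task begins before the visible span ends
  AND EndDt > @ViewStart  -- and ends after the visible span begins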

And while Portal 1 only let us filter queries against the Tasks table, we can now filter on the two other tables as well, giving us the possibility to narrow down the result sets from those tables too. In our example this could be on ProjectId, but if we have a large number of resources (basically employees) in our organization, we could also assign different resources to specific projects and thereby narrow down the result set considerably.
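
Our Resource table has no project column, but one possible way to narrow the Resources data set with the tables we already have is to only return resources that actually have an assignment in the project. This is just a sketch; note that a resource without any assignment yet would then disappear from the scheduler, so a dedicated project-resource mapping table may be a better model in a real solution.

SELECT DISTINCT r.ResourceId, r.ResourceName
FROM Resource r
JOIN TaskAssignment ta ON ta.ResourceId = r.ResourceId
JOIN Task t ON t.TaskId = ta.TaskId
WHERE t.Projectid = 1 -- made-up project id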

Another important thing for performance is of course indexes, and how to create and use them, but that is out of scope for this specific training.

Scheduler settings – Initialize load

There is another performance setting on the scheduler portlet as well, called Initialize load. It controls whether the portlet loads its data immediately or, if the portlet has listeners, waits to load data until a listener has been activated.

Also set a view that you think fits your needs under General – Scheduler settings, for example like this:

Create / Save the portlet.

Add and modify data

Under Interactions we set the Flows for creating and modifying our data. Let's start with a custom button to create our resources.

We assume you know by now how Flow works with menus and roles, so that you can access the flows from the portal.

Create resource

Create a user workflow called “Create Resource” where you ask the user to name the resource, insert the value into our Resource table, and finish with an end step (red ring). Try to get used to setting RowCreatedBy with the @UserId flow variable. Also create an InputTable in the green ring; the portal expects this.
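
The insert in the machine step could look roughly like this. @ResourceName is a hypothetical name for the text input in your user step; RowCreatedDt is filled in by its default constraint.

INSERT INTO Resource (ResourceName, RowCreatedBy)
VALUES (@ResourceName, @UserId) -- @UserId is the built-in flow variable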

Make sure you put the flow in a menu that has a role your portal user is in, so that you can access the flow from the portal.

Try out your new button and create a few resources. Feel free to use something that fits your own needs, but in this training we are going to help NASA explore the surface of Mars. So let us add three resources:

Operation and Data

So for our Interactions, use Operation and Data for Create, Drag and Drop, and Resize.

Create Task - Scope_Identity()

Create a user workflow that you call “Scheduler Create Task” with the “Operation and Data” Schema Type:

Among other metadata in the Flow, the green ring is also pre-populated with an empty InputTable table variable and an OperationInput record with an OperationType single value:

For now, just save the flow and connect it as a Create Workflow.

Once the workflow is connected we can create a task from the scheduler by highlighting an area and pressing Enter (which will execute our create workflow). But just before pressing Enter, let us take a look at how we can check what is going on behind the scenes: first press F12 in your browser to bring up the browser debug console, make sure your time slot is still selected, and now press Enter.

Under the Network tab, find tableAndOperationparameter and scroll to the bottom of the Headers section, where you can see what is being sent in the InputTable and OperationInputData.

So for the create task I can see that my InputTable is empty (which makes sense, since we are creating an event and have nothing there yet to work with), and under operationInputData I have access to a few parameters: EndDate, OperationType, SourceResourceId, StartDate and TargetResourceId.

Now that I know these, I will add all of them to my green ring, and while I am here I will also uncheck the “startable without input” checkbox (since I do not want this flow to show up in any menus, only to fire from the scheduler portlet).

For pedagogical reasons, let us add our variables to a user step just to validate things are working as intended:

Save the workflow and execute it again from the scheduler (which, as you know by now, is done by selecting a time slot in the scheduler and pressing Enter).

Verify that it looks something like this, and that we now begin to understand how the interaction between the portal and flow works with the scheduler portlet.

Let's connect our “Scheduler create task” Flow that we created for the scheduler portlet to our NewTask stored procedure.

Timezones

Is this a good time to discuss time zones? It might be a topic of its own, but whether or not you have worked much with them, time zones are often a challenge when working with timestamped data. To put it briefly, the most common solution is to always store the data in UTC (Coordinated Universal Time) and then apply an offset of x hours for where the user is. This quickly gets complicated, since there can be application services (e.g. the Flow server), a data storage (e.g. SQL Server) and a client computer with different settings between the browser and the operating system, so there are many places the time offset can be fetched from.
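
As a small illustration of the store-in-UTC idea on the SQL Server side: our tables currently default RowCreatedDt to getdate(), which is server-local time. A sketch of switching that to UTC is shown below; whether this fits your solution depends on how the rest of it reads and presents the data.

-- getdate() returns server-local time, sysutcdatetime() returns UTC
ALTER TABLE dbo.Task DROP CONSTRAINT DF_Task_RowCreatedDt
ALTER TABLE dbo.Task ADD CONSTRAINT DF_Task_RowCreatedDt DEFAULT (sysutcdatetime()) FOR RowCreatedDt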

And then there is the whole difference between reading and writing data.

If your solution will be used across many time zones and you know this might impact the experience, we recommend putting time zone validation in your testing protocols. (If your workdays are 9 a.m. to 5 p.m. and your users are in two time zones just one hour apart, it might not matter whether an order seems to be placed at 3 p.m. or 4 p.m.; but if your users are 10 hours ahead, the order might seem to be placed on a completely different date.) This setting in Chrome is also useful for simulating a different time zone:

Enough about time zones for now, let's get back to our Create Task flow.

So let's go back to our first user step. We can choose to keep our Operation Input variables on display or not; it can be good to have them there for a little while and then remove them before letting users in. Your call.

Add a Text input with the variable name TaskName, asking the user to name the new task (and check “Trim input automatically” – it is always good to rid yourself of leading and trailing spaces).

We will use the Identity column from when we created our Task table earlier, and create an @out-variable in our machine step to save the id of the newly created record. We will use the start and end datetimes from the OperationInput, and again log which user created the record (@UserId).

And we will insert a row into the TaskAssignment table, using the identity value from the Task machine step together with the resource id.
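
Put together, the two machine steps could run something like the sketch below if you do not have the NewTask stored procedure from the earlier training. The variable names @TaskName, @StartDate, @EndDate and @TargetResourceId follow the flow variables we created above; @NewTaskId stands in for the @out-variable, and assigning the task to the resource row you selected (TargetResourceId) is my assumption.

-- Task machine step: create the task and capture the new id
INSERT INTO Task (TaskName, StartDt, EndDt, RowCreatedBy)
VALUES (@TaskName, @StartDate, @EndDate, @UserId)

SELECT SCOPE_IDENTITY() AS NewTaskId -- map this to your @out-variable

-- TaskAssignment machine step: tie the new task to the selected resource
INSERT INTO TaskAssignment (TaskId, ResourceId, RowCreatedBy)
VALUES (@NewTaskId, @TargetResourceId, @UserId)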

Save and publish your flow, and verify that it works to create a task for a specific resource.

Now create a Drag and Drop flow in the same manner. Note how we have a new field in our operationInputData – TaskId. Use that as the key to change start and end date (and perhaps TargetResourceId, if that is applicable for the task at hand).

Since we do not know whether we will add user steps to this later, I will keep it as a user flow, but with no user tasks in it. Note how we also need to update our TaskAssignment table if we want to support moving tasks between resources.
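
A sketch of the two updates, assuming the same variable names as in operationInputData; using the RowUpdate columns for auditing is my assumption:

-- Move the task in time
UPDATE Task
SET StartDt = @StartDate,
    EndDt = @EndDate,
    RowUpdateDt = getdate(),
    RowUpdatedBy = @UserId
WHERE TaskId = @TaskId

-- Move the task between resources (this is the part the Resize flow skips)
UPDATE TaskAssignment
SET ResourceId = @TargetResourceId,
    RowUpdateDt = getdate(),
    RowUpdatedBy = @UserId
WHERE TaskId = @TaskId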

With the knowledge you have now, also create a Resize flow (which can be a copy of our Drag and Drop flow, except that we skip updating the TaskAssignment table).

Summary

We now know how to use our SQL knowledge to build a simple task scheduling solution with Flow Portal 2 as the user interface, Flows as the data manipulation layer, and SQL Server as the data storage.