
Recommended Posts

Posted

As part of a Corporate Programme to improve Ways of Working, we are looking to improve as many of the authority’s processes as we can. By improve, I mean removing repetitive, low-value, low-complexity tasks by streamlining and automating, and pulling everything into one place for ease of access (an Employee Hub, for instance).

For example, when a new member of staff joins NCC, the recruitment lead should be able to enter the starter’s details, press a button and all of the relevant processes should begin (payroll and pensions informed, badge created, appropriate computer ordered and prepped, AD account created, etc., etc.). Progress should be monitorable, and notifications should be triggered when time limits are passed, or the process concludes.

Has anyone achieved anything similar to / along the lines of the above utilising Hornbill, and would you be happy to share your experiences?

Posted

We have quite a mature starters process in Hornbill workflow, but most tasks are manual. We have identified about 24 human task nodes across the starter and leaver processes which we might be able to automate:

 

Workflow Node
Leaver Surgery: Set ID Pass to be Expired
Leaver SDesk: Set Account Expiry in AD
Leaver Asset checks (Laptop, Mobile, Numbers)
Leaver Surgery: Trigger disconnect DDI/Avaya
Leaver Surgery: Set advance Vodafone/EE cancellation
Leaver Surgery: Delete Mobile Device in Intune
Leaver Surgery: Delete laptop in Intune
Leaver SDesk: Manually disable/delicense the leavers account in AD
Leaver Apps removal requests
Leaver SDesk: Check AD if Leaver has anyone reporting to them
Leaver SDesk: Manually run and check the AD removal script
Starter SDesk: Create External Guest User Account
Starter SDesk: Create AD Account (both)
Starter SDesk: Create Exchange On-Prem Account (both)
Starter DOUBLE-CHECK EMAIL
Starter SDesk: Grant Access To Requested Calendar
Starter SDesk: Grant Access to Distribution Lists
Starter Create new Request for Litigation Hold
Starter SDesk: Manual update of AD with Teams DDI Number
Starter SDesk: Search for and Link the Telephone Number Asset
Starter SDesk: Search for and Link the Avaya Agent Asset
Starter SDesk: Manually add Starter to Avaya AD Groups
Starter SDesk: Upload Customer's Picture to AD Profile
Starter SDesk: Move direct reports to the New Starter

 

But in the long term, now that the finance and HR teams are getting organised with our ERP system, we are hoping to automate starters using information entered in that system to trigger other processes (at present ICT works independently). That may mean, for example, that the ERP system directly creates an AD account and adds people to the relevant groups and roles there, which cascade the accesses. It may also mean using the Hornbill API to create one or more requests for the ICT aspects like hardware and software.
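
For what it's worth, here is a rough sketch of what the ERP-side call might look like if we go down the API route. This is illustrative Python only: the endpoint URL, operation name, authorisation header and response shape are placeholder assumptions rather than confirmed Hornbill API details, so they would need checking against the Hornbill API documentation for the instance.

```python
# Illustrative sketch only. The endpoint URL, operation name, header format and
# response shape are placeholder assumptions and must be checked against the
# Hornbill API documentation for your instance before use.
import requests

INSTANCE_API = "https://api.example-instance.hornbill.com/xmlmc"  # placeholder instance endpoint
API_KEY = "replace-with-a-hornbill-api-key"                       # generated in the admin console

def raise_starter_request(starter_name: str, start_date: str) -> str:
    """Raise a new-starter request in Service Manager from an external system (e.g. the ERP)."""
    payload = {
        "params": {
            "summary": f"New starter: {starter_name}",
            "description": f"{starter_name} starts on {start_date}. Please action the ICT starter tasks.",
            # further starter details would be mapped to custom fields as agreed with ICT
        }
    }
    response = requests.post(
        f"{INSTANCE_API}/apps/com.hornbill.servicemanager/Requests?op=logRequest",  # placeholder operation
        json=payload,
        headers={"Authorization": f"ESP-APIKEY {API_KEY}"},  # placeholder auth scheme
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["params"]["requestId"]  # placeholder response shape
```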

In short, we don't have the comprehensive automated system, but we are also on the journey. We are also a local authority, in Essex.

Posted
8 minutes ago, Dan Stewart said:

when a new member of staff joins NCC, the recruitment lead should be able to enter the starter’s details, press a button and all of the relevant processes should begin

So the first question would be, where is the button?
If this is in Hornbill, then creating the other actions should be relatively straightforward, deciding on the best tool for the job (Full Automation, a Human Task, or a Request) will require some planning.
If it is in another system the first requirement is getting that information into Hornbill so that it can be used to automate the identified processes - this can be done in a number of ways ranging from manual input, through email Routing Rules, to API calls.

10 minutes ago, Dan Stewart said:

Progress should be monitorable, and notifications should be triggered when time limits are passed, or the process concludes.

With linked Requests (these can be automatically spawned from a Workflow, generated manually by Analysts, or a combination) you have full tracking, with SLAs/Targets if required.

Posted

Hi Steve. In this scenario I am talking about it being in Hornbill. Are there any online resources where I can see what the art of the possible is in terms of multiple requests or tasks being raised? These would need to span multiple service domains, given the different nature of the actions required. Is that possible?

Posted

There is a function within a workflow to create a new request, so provided you know what is needed from the original capture, you can have one parent request which uses that capture information to create child requests. Those requests can be against any service and catalog item (allowing for the normal subscriber/supporting team restrictions).

Posted

@Dan Stewart The Log New [Request Type] nodes will allow you to generate a Request within a Workflow.

See the relevant section on the Service Manager Workflows documentation page.

Requests generated in this way will automatically be linked to the existing Request, but once raised are completely independent unless you configure both Workflows to specifically interact with each other, which is a little out of scope for a forum discussion.

Posted

Hi Dan

We have a new joiner process that from the one call will raise: New account / Drive access / Email setup / Mailbox access / Hardware requirements / Remote access / Multiple systems access. This is all driven from one IC (Intelligent Capture) that then spawns multiple child requests from the parent request, so it sounds similar to what you are looking to achieve. One thing I do find is that having everything in one IC becomes very hard to maintain / update, so that's something to bear in mind: the more you add, the more complex any small changes are. Happy to do a demo / discuss further if required, of course.


Many thanks

Adrian

Posted
23 hours ago, Steve Giller said:

The Log New [Request Type] nodes will allow you to generate a Request within a Workflow.

@Dan Stewart you can then use workflow to Update Request Custom Fields and direct them to place information from the parent record into any field in the child record; effectively picking off the relevant information for each child record type. One thing to bear in mind with this feature is - because Hornbill is asynchronous - you need to distance the update node from the create node to ensure the child request is fully created before you try to update its custom fields (else it tries to update the fields on a non-existent request and you get no data passed over).

Posted
8 minutes ago, Berto2002 said:

else it tries to update the fields on a non-existent request and you get no data passed over.

Just for accuracy, I have to point out that this is not what happens; if you try to update a non-existent request you get an error.

There can be unexpected outcomes if you trigger multiple actions on the same entity at the same time but the Entity has to exist before you can do this.

Posted
23 minutes ago, Berto2002 said:

One thing to bear in mind with this feature is - because Hornbill is asynchronous - you need to distance the update node from the create node to ensure the child request is fully created before you try to update its custom fields

Any guidance on how to ensure you've got enough distance? 😉

Posted
1 minute ago, JJack said:

Any guidance on how to ensure you've got enough distance?

The node I call a "time waste node" is a Get Request Details (of some kind) which then cycles through the decision until the requestId is populated and the decision goes onwards. The node immediately afterwards is then one to update custom fields on the new requestId.

[Screenshot: workflow segment showing the Get Request Details "time waste node" loop followed by the custom fields update node]

This worked for a few years, but last month I started to get the custom fields not updating on the new request (and no errors flagged). It seems that while the requestId was present, that was still not long enough to then safely add more data to it. I raised this with Hornbill support and received the following response:

"Without replicating and testing that exact scenario (which would be difficult) I cannot say for sure.  Although it seems to be in line with a number of timing issues I have seen recently. If the update Custom C node can be moved painlessly further down the line then I would certainly do it."

I moved it two further nodes down and I am currently monitoring such requests:

[Screenshot: revised workflow with the update custom fields node moved two nodes further down]

I would like to see an enhancement where all the Request fields are configurable when creating a new Request, so all the data can be pushed in one go without this hazard, but that would be quite a bit of work.

Another way to do it would be to have a suspend with a 1-minute 'timer' on it, but these can give variable outcomes. If you don't mind waiting 1-10 minutes it's nearly OK. Remember also that the suspend durations only count in business hours, so if the request comes in at 5.59pm, a 1-minute wait may release at 7am the next day...

Here's another hazard to look out for: when you spawn a new request, it starts its own workflow running immediately. If you have any delay on the update nodes (as above) then there is a good chance the first part of the new workflow will whizz through before the values reach the new request. So, if you depend on those values for processing the child request, be sure to introduce a delay in that child request to ensure the values are there when you need them.

We have tried several approaches to this, including: 1) having the new workflow detect something about the native request (such as summary "starts with") and then suspend for 1 minute; 2) thinking ahead and ensuring the dependency is always in the second stage; or 3) having a human task present early on, if appropriate, which always allows enough time.
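
To make the "time waste node" pattern above concrete, here is a rough pseudo-Python translation of the loop. This is not actual Hornbill workflow configuration; the two callables are hypothetical stand-ins for the Get Request Details and Update Request Custom Fields nodes, and the sketch is illustrative only.

```python
# Rough illustration of the polling pattern described above: check whether the
# child request can be retrieved, and only then push the custom field values.
# The two callables are hypothetical stand-ins for the 'Get Request Details'
# and 'Update Request Custom Fields' workflow nodes.
import time
from typing import Callable, Optional

def update_child_when_ready(
    child_request_id: str,
    custom_fields: dict,
    get_request_details: Callable[[str], Optional[dict]],
    update_custom_fields: Callable[[str, dict], None],
    max_attempts: int = 10,
    poll_interval_seconds: float = 5.0,
) -> None:
    """Poll until the child request can be retrieved, then update its custom fields."""
    for _ in range(max_attempts):
        if get_request_details(child_request_id) is not None:  # child appears to exist
            update_custom_fields(child_request_id, custom_fields)
            return
        time.sleep(poll_interval_seconds)  # wait before checking again
    raise RuntimeError(f"Child request {child_request_id} never became retrievable")
```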

Posted
5 minutes ago, Berto2002 said:

Here's another hazard to look out for: when you spawn a new request, it starts its own workflow running immediately. If you have any delay on the update nodes (as above) then there is a good chance the first part of the new workflow will whizz through before the values reach the new request. So, if you depend on those values for processing the child request, be sure to introduce a delay in that child request to ensure the values are there when you need them.

 

It feels like this long-standing roadmap item would help in this particular scenario:

[Screenshot: the long-standing roadmap item referred to above]

Posted

I also wonder if the still-experimental timeplans functionality could help with the timings of a workflow?

I've not turned it on yet, but the setting is still available - https://live.hornbill.com/<INSTANCE>/admin/platform/advanced/settings/

[Screenshot: the experimental timeplans setting in the admin console]

Posted

@Berto2002 Many thanks for such a complete response. It does seem this feature does not have adequate support. It's something we need to work, preferably without uncertain workarounds, so do hope it gets better.

Posted
12 minutes ago, Berto2002 said:

which then cycles through the decision until the requestId is populated and the decision goes onwards.

Are you saying here that you're testing for the Request Id of the Request that the "Create LOB account request" node generates?
If so, your "Time waste node" will never happen.

A Log Request node always returns a Request ID. That's one of its output parameters.

If your Workflow ever could go down the "No Match" route what does the "Time waste node" do?
It looks like this would simply loop forever (or until the Workflow engine detected the infinite loop) because it does not appear that you're ever updating the condition that you're testing.

Posted
9 minutes ago, JJack said:

It's something we need to work, preferably without uncertain workarounds

Can you define what "it" is?

The scenario here is really about building a workflow that sends instructions in an incorrect order - the difficulty is that, because we present a codeless environment and the order of the instructions is built on the underlying code, there will always be edge cases that can be found. The "delays" talked about are in the order of milliseconds, so if we have a defined use case rather than a generic "it", we can look into what issues you might have.

Posted
29 minutes ago, Berto2002 said:

"Without replicating and testing that exact scenario (which would be difficult) I cannot say for sure.  Although it seems to be in line with a number of timing issues I have seen recently. If the update Custom C node can be moved painlessly further down the line then I would certainly do it."

Ok, I don't know who gave this advice, but it is incorrect.

There seems to be a misconception being reinforced here that node execution in a workflow is asynchronous. This is incorrect. Node execution within a workflow is always synchronous, meaning that if there is a Node B after a Node A, Node B will only execute once the Node A execution completes. To put this in the example above, if a workflow has a "raise new request" node, the workflow execution past this node will continue once the new request has been raised and NOT while the request is being raised.

The workflow nodes themselves can perform various actions that are themselves asynchronous (integration nodes/cloud automation nodes come to mind), but the nodes within a workflow, as a sequence of nodes within that workflow, will execute synchronously.

Posted
47 minutes ago, JJack said:

Any guidance on how to ensure you've got enough distance? 😉

I think we only have this on one process at the moment, though I've recently started work on another that will need something like this.

I spent at least an afternoon searching for the best way to solve this. My current solution is to suspend the new request early on, pending a custom field that would normally be set in capture, and then later on, when it loops round on itself again, there's a suspend for 1 minute before it tries to set the custom field. We use it for tracking certificate expiries, though, so working hours aren't really relevant, because the timescale is generally months or years.

Posted

Will say this again and will bold it and underline it:

Node execution within a workflow (a sequence of nodes executing) is synchronous.

If there is an issue in a scenario where a "raise new request" node is followed by a node that updates the newly raised request, and the update does not happen, this is NOT because of node execution order (i.e. not because of asynchronous execution; that is not the case, the nodes execute synchronously). This can potentially be a caching issue, but it is something that should be working, but it isn't, so something Hornbill needs to investigate and address.

EDIT: in the scenario where a new request is raised, which has its own workflow running, the "old" request's workflow and the "new" request's workflow will execute asynchronously with respect to each other because, in essence, they are different workflows... so one would need to be aware of this aspect when designing the workflows for the "old" and "new" requests.
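
As a toy illustration of that last point (plain Python threads, nothing Hornbill-specific): two independently running workflows behave like two concurrent threads, so the child's workflow can read a field before the parent's workflow has written it, unless the child is designed to wait.

```python
# Toy illustration only: two independently running workflows behave like two
# concurrent threads, so the child may read a value before the parent writes it.
import threading
import time

child_request = {"custom_c": None}  # stand-in for a custom field on the child request

def parent_workflow():
    time.sleep(0.5)  # other nodes run in the parent before it reaches the update node
    child_request["custom_c"] = "value from parent"

def child_workflow():
    # The child's workflow starts immediately and may run ahead of the parent's update.
    print("Child read custom_c =", child_request["custom_c"])  # likely still None

parent = threading.Thread(target=parent_workflow)
child = threading.Thread(target=child_workflow)
parent.start()
child.start()
parent.join()
child.join()
```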

Posted
4 minutes ago, Victor said:

the workflow execution past this node will continue once the new request has been raised

@Victor it's good to hear this but it is not congruent with my experience or what I have heard from support over the last two years.

The symptom experienced is: Custom Field values are (sometimes) not found in the created Request when the Update Request node directly follows the create node.

The advice given has been: delay the Update Request node (somehow) so time elapses before it runs, the inference being that the Request has to finish being created.

Experience of the advice: that seemed to work and was stable for the last two years, until we had a recent recurrence and were advised to push the node further back.

4 minutes ago, Victor said:

is something that should be working, but it isn't, so something Hornbill needs to investigate and address

This is also welcome and I think what needs to happen; but my support request gets closed with the advice to push the update node further down the chain... what do I do?

Posted

@Berto2002 I'm not sure who provided this advice (I tried searching in our instance, but my searching skills might not be the best), but the advice is not accurate. If this is a caching issue, it might explain why running the node with some delay could help, but it's not a real solution. I wouldn't suggest it as a workaround because it could lead to an unpredictable situation.

I'm not exactly familiar with the issue you mentioned where you have a "raise new request" followed by "update the new request," and it's not updating the new request. If this is the case, I must stress that it's not about how nodes execute (sync/async). Delaying the update node might help in some cases (if caching plays a role here), but there's no guarantee, as you've seen the issue continue even after using what seems to be an incorrect "workaround." I would still recommend that Hornbill take a closer look to identify the root cause and a fix, assuming there's a confirmed defect.

Posted

Wow - thanks for all the responses, and apologies for opening a can of worms 😂 I'll get back to those who kindly offered demos / examples once I've made the case to our Senior Management that Hornbill should be our preferred option for doing this, rather than trying to build something from scratch in the 365 Power Platform.

Thanks for all your help.

Posted

@Dan Stewart I warmly recommend checking out our Hornbill Academy portal: https://academy.hornbill.com/. You'll find a wealth of valuable introductory and advanced courses there, including ones on automation and workflows. It's a fantastic resource to help you get up to speed with what's theoretically possible to build in Hornbill.

