Set up workflows for parent and linked requests



@Paul Alexander, yes I was doing the same. But you mentioned "for some reason when I update the PARENT request custom field (I just add a "1" and set the node to 'Append text', I end up with a 1 1 (with a space in the middle)." so I was asking: does the custom field value have spaces between the 1's and an extra 1?


Guest Paul Alexander

Oh I see....no, sorry, that was badly put! 

I won't bother to try to explain it more because I'll only make it worse :D



Guest Paul Alexander

I THINK it should look for "1 1 1 1 1 1"

The way to check is to see what actually IS in Custom Field 30 of the request. You can find this by looking in the Database Direct and using this code:


SELECT h_custom_30 FROM h_sm_requests_extended WHERE h_request_id = '(Your SR Number)'


@Steve Giller - I'm outputting to the timeline and it's blank.

@Paul Alexander - I haven't stored the Parent Request ID in a custom field, how do I do that?   

I have various 'update custom field' nodes in the parent request too that populate Custom 30 with a "1" if the linked request isn't triggered (as per Steve's suggestion), so I don't understand why I need to store the Parent Request ID for these ones?


28 minutes ago, Steve Giller said:

I'm sure it can be made to work, but it's gone a bit beyond the "thrashing it out on the forum" level at this point. The basic principles are fine, but now we're past that and fine-tuning Workflows, it's moving towards the Expert Services arena, I'm afraid.

@Steve Giller are you able to just confirm that the various 'update custom field' nodes I have in the parent request, which populate Custom 30 with a "1" if the linked request isn't triggered (as per your suggestion), don't need a node to store the Parent Request ID?

Also, am I copying Custom 30 to the timeline correctly? It's still showing as blank.


This is what is showing on the timeline:


Many thanks


  • 1 month later...

OK, so I've got this far, but I still have issues with this setup.

To begin with, I added 'update custom 40' nodes to just the 'no match' branches of the parallel process in the linked requests themselves, but it wasn't working.

I therefore added 'update custom 40' nodes to all branches within my parallel process, and in all linked requests.

This works if all of the manual tasks raised within the parent request are completed before all of the linked requests, proving that the approach works. It doesn't work, however, when the linked requests are completed before the manual tasks: the workflow just stalls at the 'suspend and wait for update' node. No errors are given; it just waits there even though every flow within the parallel process has completed. I have checked in Database Direct, and custom field 40 has the correct number of '1's in it for the parent request to move on to resolution.

I must therefore be missing something.

This is what's after the parallel process ends...


This is the decision that checks custom 40 for 10 x '1's...


The parallel process is far too large, so the screenshot is tiny, but I'm happy to share the process with whoever might be able to assist. @Victor @James Ainsworth @Steve Giller ???



I haven't read all of this to check the context, but how about:

  • Nominate a custom field in the parent ticket with unlimited text, say h_custom_31
  • Include in your workflow for each child node that it will update the parent's h_custom_31 (with the "append" option set) with a set 'string' (i.e. a codeword) when it is resolved; say the string is "SecPass1" or "SysAcc2" (something a user would never type)
  • In your parent node, have the ticket suspended "waiting expiry" with a suitable delay of, say, 2 hours
  • After the expiry, a decision node examines h_custom_31 to see if it yet has each one of the expected (7?) strings from the (7?) child tickets
  • If no (e.g. it only has 6 of the 7 code words), then it returns to a suspend "wait expiry" state for another 2 hours before checking again
  • If yes (i.e. all 7 code words are now present), then move on and resolve the parent
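The checking step above can be sketched in Python (a minimal illustration only, not Hornbill workflow code; the codewords and field name are made-up examples, as in the post itself):

```python
# Hypothetical codewords that each child ticket would append to the
# parent's h_custom_31 field on resolution (invented for illustration).
EXPECTED_CODEWORDS = [
    "SecPass1", "SysAcc2", "NetAcc3", "MailBox4",
    "Laptop5", "Phone6", "Badge7",
]

def all_children_done(h_custom_31: str) -> bool:
    """True only when every expected codeword appears in the field."""
    return all(word in h_custom_31 for word in EXPECTED_CODEWORDS)

# Only 6 of 7 codewords present: keep waiting.
print(all_children_done("SecPass1 SysAcc2 NetAcc3 MailBox4 Laptop5 Phone6"))  # False
# All 7 present: safe to resolve the parent.
print(all_children_done(" ".join(EXPECTED_CODEWORDS)))  # True
```

One nice side effect of distinct codewords over repeated '1's: when the check fails, you can see exactly which child has not reported back yet.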



@Berto2002 indeed, it was advised before, as a possibility:

On 8/11/2022 at 8:58 AM, Victor said:

1. Have the main workflow in a Suspend Wait For Update - it will resume when an update is made on main request

2. From the linked request, when resolved, push an update for a custom field of the main request

3. From the linked request, when resolved, push an update on the main request, e.g. "Linked request 1 was resolved" or anything really, it just needs to be an update

4. Have the main workflow refresh request details and check the value for the custom field

At this point 2 - 4 will be looping and 2 will be updating a different custom field for each linked request and step 4 would be configured to check values for all these custom fields. If not all values are set, loop back to Suspend Wait For Update. When all values are set, meaning all linked request resolved and updated their correspondent custom field in main request, continue the main workflow.

Just an idea, might not be the best, and it would more or less work for a fixed number of linked requests. It also needs available custom fields in main request. One could possibly explore the option to update one custom field only by adding content to this field each time a linked request is resolved then perhaps check for this value in the main request, e.g. how many characters or similar are in the string...
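The single-field variant mentioned at the end (append to one custom field, then inspect the accumulated value) can be sketched as follows; this is a minimal Python illustration, with the field name and expected count assumed for the example:

```python
# Each resolved linked request appends a "1" to one custom field
# (say h_custom_40); the parent resolves once all have reported in.
# The expected count of 10 is an assumption for this sketch.
EXPECTED_CHILDREN = 10

def ready_to_resolve(h_custom_40: str) -> bool:
    # Count the '1' characters rather than the string length, so the
    # spaces that 'Append text' inserts (e.g. "1 1 1") don't skew it.
    return h_custom_40.count("1") >= EXPECTED_CHILDREN

print(ready_to_resolve("1 1 1 1 1 1 1 1 1"))    # 9 ones -> False
print(ready_to_resolve("1 1 1 1 1 1 1 1 1 1"))  # 10 ones -> True
```

Counting the '1's instead of comparing whole strings also sidesteps the "1 1 with a space in the middle" confusion from earlier in the thread.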

However, your solution is a bit less efficient: you don't need the 2-hour (or any) delay, which just creates and triggers unnecessary events. With the above, you can have the main workflow resume only when each linked request has been completed.


  • 5 weeks later...

Hi it's me again!  So here's the thing...

I think I've got my '1's working well now. I have test 'update timeline' nodes in the BPM for now so I can check that all 'update custom field 40 with a 1' nodes are being passed through correctly, which they are.

Sometimes, however, even though the workflow has passed through them all (and timeline updates are being received to prove this), the node that then displays the contents of custom 40 in the timeline shows one less '1' than there should be. Sometimes this happens and other times it doesn't.

I thought it might be to do with the speed at which all the '1's are appended to the custom field, and it not quite catching up with itself before the timeline update, so I added a wait node that waits a minute before updating the timeline with the contents of custom 40. This, however, hasn't made any difference. Sometimes it's correct, sometimes it isn't.

Whatever it is, it means that sometimes my test parent requests don't resolve as they should, because the workflow is still waiting for another '1'. Even when it goes around the loop again once the final task is complete, it doesn't rectify itself: it still thinks a '1' is missing, when I know for a fact that all 'update custom field 40 with a 1' nodes have been passed through, as they updated the timeline accordingly.

Rather baffling!   

What do you think? Has anyone come across something similar when testing? Any other ideas for how I might look in the back end to try and figure out which '1' it thinks didn't get appended to the custom field?



Just to add, I also tried a manual update to take it around the wait loop again to see if it would catch up with itself, but after a few hours custom field 40 still isn't populated with the correct number of '1's, even though the test nodes that update the timeline suggest otherwise.

For context, of the 20 tests I've run today, 6 have this problem. It's not filling me with confidence that this workaround is fit for purpose, as we can't have this happening so often in our live requests.



  • 2 weeks later...


Hi, I had a quick read through this thread. There are some very interesting and creative approaches here to make this work, but there are a couple of things worth explaining for anyone trying to achieve this sort of thing.

First thing to note: the business process engine is highly asynchronous, inasmuch as it does lots of things in parallel at the same time, and there is no guarantee over which things get done in which order when running two or more processes side by side. This is also true of parallel processing: although the branches start sequentially at the initial branch point, the order in which each branch path is processed is not controllable. Once a processing branch hits a node that makes some other call, the BPM engine effectively suspends that node while waiting for an event. For the most part, asynchronous processing is not intuitive; things do not happen in the order you might assume.

The rather creative solution of using a custom field to "count" the child requests (incrementing a custom field value each time a request is spawned, and decrementing it once the child has completed, while a parallel processing path waits on that value) might seem logical, but it will almost certainly not work most of the time. The problem is that changing the value of a custom field involves a database read and a write, and that operation is not atomic; everything is asynchronous, so from a timing perspective you will almost always see a miscount of some form. Trying to do this with a custom field is treating the BPM as a programming environment, and it is not (intentionally) designed as a graphical scripting/programming environment.
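The lost-update problem being described can be shown deterministically in a few lines of plain Python (no Hornbill involved; the interleaving is written out by hand to mimic two branches racing on the same field):

```python
# Two child workflows, A and B, each try to append a "1" to the same
# custom field. Both perform a non-atomic read-modify-write, and both
# happen to read the field *before* either writes back.
field = "1 1"        # parent's custom field after two earlier children

read_a = field       # A reads "1 1"
read_b = field       # B also reads "1 1" (A hasn't written yet)

field = read_a + " 1"  # A writes back "1 1 1"
field = read_b + " 1"  # B overwrites A's write: still "1 1 1"

print(field)  # "1 1 1" -- one of the two appends has been lost
# Four children have appended in total, but only three "1"s survive:
assert field.count("1") == 3
```

This is exactly the "one less '1' than there should be" symptom reported earlier in the thread: whichever branch writes last wins, and the other branch's append silently disappears.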

So I would recommend you do not try the above approach, because it will, at the very best, be mostly unreliable.

The only (and correct) way this can be achieved will be for us to implement a dedicated operation called "Wait for Linked Requests to Complete". This would present as an additional option in the below list of options, and would suspend the BPM process while waiting for the linked child requests to complete.

I hope that explains why what you are trying to do will not work. 



