
Hornbill Staff DR

Hornbill Product Specialists
  • Posts: 282
  • Joined
  • Last visited
  • Days Won: 19

Everything posted by Hornbill Staff DR

  1. To conclude the topic of 2FA, this functionality is now available (build 3629 and later).
  2. Hi @Stephen.whittle thanks for your post. When it comes to the intelligent capture asset form, assets can be filtered based on the customer's assets, assets shared with the customer (the customer being the user selected in the customer search form), assets that belong to a particular site (when the site selection form precedes the asset form), and assets that are associated to the same company as the customer. This filtering takes place on the relevant tabs in the form, and there is the option to show/hide each tab, which lets us make only certain filters available. The "Asset Search Term" simply sets a default value which appears in the search bar. If the user cleared this value, they could type something else and select from the available assets returned. So this field doesn't perform any filtering on the asset records accessible by the form; all assets would still be available (as per any filtering on customer/site/company). I don't think this will help with your requirement. The solution might lie in Hornbill's ability to denote the sharing of assets. Among other things, an asset can be configured as being shared with a Hornbill group. Perhaps explore creating groups (of type General) which represent each of the roles within the organisation (such as Doctors) and sharing assets with the appropriate group(s). On your asset selection form, limit the options to only show the "Shared" assets tab. In terms of managing these groups, it may mean you'll need several additional user import configurations to ensure the users are being placed in the correct groups (i.e. Doctors are placed in the "Doctors" group). Of course, this will depend on that information being available in the directory source, and in a format that allows the user objects to be filtered reliably using the filter options available to us (e.g. LDAP object filters if the source is Active Directory).
While this approach may attract more configuration in the area of user imports, it should help reduce the complexity of your progressive captures. I hope that helps. Dan
  3. Hi @SJEaton,  the serial approach should technically work, but I suspect it won't have precisely the desired effect when it comes to the checkpoints. In essence, you want to use the checkpoints as an indicator showing when each of the linked requests is complete. When things are configured in series, there is a definite order and one thing has to happen after another. If you set your checkpoints up in series, let's say your last one is "Equipment Issued": even if the corresponding linked request is resolved first, that checkpoint won't be marked until the workflow has passed through the other checkpoints in sequence. So ultimately, yes, all your checkpoints will eventually get marked, but there will likely be a situation where a linked request is resolved but the corresponding checkpoint is not checked, because the workflow is still waiting for one (or more) of the other linked requests to be resolved first. I'd dispense with the specific checkpoints and perhaps have a single one which indicates "User Provisioning Actions Complete", or something generic like that. This will only be marked when all the linked requests are resolved. If an agent wants to see what's still outstanding, they should be directed to the linked requests section, where they can inspect the status of the linked requests. I hope that helps. Dan
  4. @SJEaton nice, always great to see advanced capability being explored! I'll first cover off how we can wait for a linked request to be resolved. While there isn't a specific business process operation which waits for linked request resolution, we can actually use the operation "Wait for Request Resolution". Indeed, this is typically used to wait for the resolution of the request against which the BPM is running, but if we switch the "Request ID" input parameter to "variable" and inject a variable which contains a request ID belonging to one of the linked requests, then this node will actually wait for the resolution of the request ID specified. You can get a request ID by using an output variable available in the "Log Request" node which is logging these child requests from your BPM. Now, focusing on your requirement a bit more specifically, you have several child requests which may be resolved in any order. So, as you have surmised, you're going to need a parallel processing block that has several "Wait for Resolution" nodes corresponding to each child request. Unfortunately, I don't believe the business process engine is capable of handling multiple suspend nodes in a parallel block. When the engine resumes a suspended process, I believe it sends a generic "resume" instruction which doesn't include any node identifier. So if the process is waiting at multiple suspend nodes, when the resume instruction is given by the system, it doesn't know which node it needs to unsuspend. This is what I encountered the last time I tried this approach, and I believe that's still the case. Perhaps you could try and confirm that is so? Dan
  5. Hi Dan,  thanks for your post. These are great suggestions and the Product Team has visibility. Additional multi-select actions on the request list would be a great complement to the ones already available. If you're interested in seeing what's coming through the development pipeline, the current roadmap can be viewed via Hornbill Configuration > Solution Centre > Roadmap Library. This area lists all the Hornbill applications and the current development stories working their way through. For the benefit of others who may come across this post, I'll leave a link to the current multi-select actions available in the request list: https://wiki.hornbill.com/index.php?title=Multi-Select_Actions Dan
  6. @chriscorcoran thanks for your post. There's a handy quick reference available on our wiki which describes the main request table (h_itsm_requests).  It can be found here: https://wiki.hornbill.com/index.php?title=Table_Info:_Main_Request_Table . There's also the "Entity Explorer" built into the Hornbill Service Manager application, which describes all the tables and columns (see image below), and some info on our wiki here: https://wiki.hornbill.com/index.php?title=Application_Entity_Viewer If I've understood your needs correctly, I believe you want to see what the response and resolution targets are for each request. The response target is stored in h_respondby and the resolution target is stored in h_fixby (the resolution-related SLA columns use the word "fix" in their names). h_respondby and h_fixby store the timestamp of the current SLA targets. These columns are also available to display in your request list, allowing agents to sort by either the response or resolution target and see which requests will breach soonest. I hope that helps, Dan
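To illustrate the point about breach targets, here's a sketch of a direct SQL query over the main request table that surfaces the requests closest to breaching their resolution target. It only uses the columns named above; the request reference column name (h_pk_reference) and the 'status.open' status value are assumptions, so check them against the Entity Explorer for your instance:

```sql
-- Sketch: list open requests ordered by the soonest resolution target.
-- h_pk_reference (request reference) and 'status.open' are assumed values.
SELECT h_pk_reference,
       h_respondby,   -- response target timestamp
       h_fixby        -- resolution target timestamp
FROM   h_itsm_requests
WHERE  h_status = 'status.open'
ORDER  BY h_fixby ASC;
```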
  7. @Malcolm it appears we were both led down the garden path thanks to that description in the entity viewer. As Steve has indicated, apparently that column (h_email_datelastsent) is only populated under a particular circumstance, namely when you have a business process running against the request which contains a "suspend wait for email to be sent" node. Personally, if the column is there, one would think it's sensible to populate it at all times when an email is sent from a request; surely the suspend node could then still reference it in some way? Anyway, I'm sure there were reasons for the design, even though I can't fathom them! Sorry I can't be of further assistance on this one. Dan
  8. Hi @Malcolm thanks for your post. I believe you're referring to the column in the table h_itsm_requests called h_email_datelastsent. Using the Entity Viewer I've checked the description of that column, and it should be storing "the last date of when an email has been sent from the request". So we should be able to use this in a report to understand when the last email was sent from a request. Obviously, it won't tell us who sent the email (though one would assume this would usually be the owner of the request), or to whom it was sent. To be honest, I would think the "who" doesn't matter so much, as you're probably most interested in confirming that some outbound correspondence has actually taken place within the last x days. From my checks, I can't see this actually updating when I send an email from a request, so I'll have to defer to development to see what's going on. Dan
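As a sketch of the kind of report query being discussed, the following would list requests with no recorded outbound email in the last 30 days. Bear in mind the caveat discussed above: this column may only be populated under particular circumstances, so treat the results with caution. The h_pk_reference column name is an assumption:

```sql
-- Sketch: requests whose last recorded outbound email is over 30 days old.
-- Caveat: h_email_datelastsent may only populate when a "suspend wait for
-- email to be sent" node is in use, so results may be incomplete.
SELECT h_pk_reference,
       h_email_datelastsent
FROM   h_itsm_requests
WHERE  h_email_datelastsent < DATE_SUB(NOW(), INTERVAL 30 DAY);
```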
  9. Hi Dan,  thanks for your post. From what I understand currently, MS Project Online doesn't offer an integration opportunity that can be packaged into the Hornbill iBridge. However, Hornbill iBridge does offer the ability to trigger Flows in MS Power Automate. This offers lots of opportunity when it comes to integration with Microsoft products. The Power Automate operations available for MS Projects Online can be found here: https://powerautomate.microsoft.com/en-us/connectors/details/shared_projectonline/project-online/ Once you've built your Power Automate Flow, simply use the Hornbill iBridge operation Microsoft > Flow > Trigger Flow to trigger the desired flow. I hope that helps Dan
  10. Hi @Jeremy,  it's an interesting ask. Typically, I would never advise that a two-stage closure set-up is removed. An exception might be specific requests (e.g. new starter, hardware) where the process is established and reliable in fulfilling the customer's needs. Here a two-stage closure may not offer value, because the BPM is helping to ensure things are delivered the right way, every time. As I'm sure you're aware, the period between resolution and closure is intended to give the customer the opportunity to confirm the resolution has been successful or to re-open the request. I appreciate the distinction between resolution and closure is sometimes lost on the customer, but going directly to closed sets the precedent that closed requests can be reopened. Where is the line then drawn? Can a customer come back and open a closed request 3, 6, or 12 months later? Effectively, the ticket doesn't have an end to its lifecycle. If the BPM was adjusted to ensure resolution emails were sent after a ticket was reopened from closed, the BPM would technically never complete, because you have to keep the BPM active in order to catch a reopen action and then loop to a wait-for-resolve before sending the resolution email again. There are elements in Hornbill that are specifically designed to instil good practice. The two-stage closure, the "It's working/It's still broken" buttons, and feedback only being left once closed are the three elements involved here. There isn't really a way to circumvent these best practices without causing problems elsewhere. From my perspective, closed is closed and shouldn't be reopened. I'd suggest experimenting with the duration of the two-stage closure, or removing the resolution text from the email and having a generic notification stating that their resolution is available in the portal. This forces the customer to interact with the portal, building familiarity.
Other campaigns outside of the tool to raise awareness of the portal and how feedback can be left will also help. Dan
  11. I see. So the motivation is to increase the quality of feedback? Just to clarify, I notice you state that the customer "cannot give feedback", which might imply customers simply aren't bothering to leave feedback. Do you find you receive a good number of feedback responses and it is just the quality of feedback that's lacking?
  12. Hi @Jeremy thanks for your post. I'd be interested to know why you feel that feedback can't exist in a two-stage closure situation. Can you elaborate on your decision to remove two-stage closure? Dan
  13. Hi All,  thanks for reporting the issue(s) and taking the time to supply the supporting screenshots. Hornbill Support are currently on the case and will provide an update in due course. Dan
  14. Hi @JAquino,  thanks for your post. The assets that are owned by, used by, or shared with a user can be found in one of two ways. From the asset list, use the asset search option (the stand-alone magnifying glass located next to the quick filter). Selecting the "Used By" condition and specifying a user will have the desired effect and return asset records shared with that user. The alternative is to approach it from the user's profile. A user's profile can be accessed from the Co-Worker directory (https://wiki.hornbill.com/index.php?title=Co-Workers) (Home Icon > Co-Workers); then search for and select the specific user to view their profile. A Co-Worker can also be searched for using the Global Search bar at the top of the Hornbill interface. Assets are available in the "Service Manager" tab as shown in the image. The quick filter doesn't act upon the list of users an asset is shared with; it only acts upon asset attributes which exist directly against an asset record. I hope that helps, Dan
  15. Hi @Malcolm,  the following wiki page describes how to identify whether a user owns Documents or Libraries and how to change ownership: https://wiki.hornbill.com/index.php?title=Change_Ownership . With regards to identifying which tasks a user is assigned, what @nasimg suggests is a good idea. As the user's manager, you are able to see the tasks currently assigned to that user via the "My Activities" interface. However, only the task owner or task assignee is able to complete or edit a task (the specific detail on tasks is available here: https://wiki.hornbill.com/index.php?title=Activities ). There is a very useful Service Manager feature enabled by the setting app.experimental.advancedRequestTaskCompleter. The advanced task completer feature extends the concepts of team membership and service supporting teams, enjoyed by requests, to tasks. It should be noted that this only applies to tasks that are associated to requests. Detail can be found here: https://wiki.hornbill.com/index.php?title=Service_Manager_Experimental_Features I hope that helps. Dan
  16. Hi IT Specialist,  thanks for your post. Hornbill user accounts are archived by setting the user "Status" to a value of "Archived". User accounts with a status of "Archived" will not be shown in any user search or other user-related lists or menus.
Changing the status of an individual user account: The status of a Hornbill user account can be changed via Hornbill Administration > System > Users & Guest Access > Users. Find the account in question and click to view the details. The status field can be found in the location shown, and the desired value can be selected from the drop-down menu.
Using an LDAP import configuration to update the status of a Hornbill user account: If you are importing and managing your Hornbill user accounts based on the contents of your Active Directory, it's possible to update the Hornbill user account status via an LDAP import configuration. It will be necessary to create a new LDAP import configuration specifically for this purpose. This is done in Hornbill Administration > System > Data > Data Import Configurations. The documentation relating to LDAP import configurations can be found here: https://wiki.hornbill.com/index.php?title=LDAP_User_Import
The key information which should be considered: it will be necessary to define a DSN and LDAP filter query to identify the user objects which must have their corresponding Hornbill user account status set to "Archived". In the User Options tab, "Status" should be enabled ("Update Only" is typical, but this will depend on what other LDAP imports are operating) and the value should be set to "Archived". All other User Options can be set to "No Action". I'd suggest taking time to digest the LDAP Import wiki page, particularly the "Testing Overview" section. Should you prefer, if your organisation has a Success Plan you may have available credits which you could draw upon to have one-to-one guidance from a Product Specialist. Alternatively, Hornbill Expert Services could also be procured.
I hope that helps, Dan
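To illustrate the LDAP filter portion of that configuration, a common Active Directory pattern for matching disabled user objects uses the userAccountControl bitwise matching rule (bit 2 = account disabled). This is a generic AD example rather than Hornbill-specific configuration, so verify it against your own directory before use:

```
(&(objectClass=user)(objectCategory=person)(userAccountControl:1.2.840.113556.1.4.803:=2))
```

A filter like this would let the dedicated "archive" import target only the disabled accounts, leaving your other imports to manage active users.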
  17. Hi @samwoo, glad to hear that's resolved it! I'll feed our thoughts back to the Product Team to see if the options can be adjusted to make the requirement for a "From Status" more obvious. Dan
  18. Hi Samuel,  thanks for your post. Whenever I use the suspend "Wait for Status Change" node I always set a "From Status". This is something I've always done, though I acknowledge that the "From Status" is not mandatory; maybe that's a question for the Product Team. Have you tried setting this? Looking at your design, I assume that at the point it suspends, the request will have a status of "Open", in which case this is what you should specify as your "From Status". On a side note, I assume the status of the request is a relevant factor in your process design? If you're just waiting for an expiry time, there is a specific node for that called "Await Expiry". I hope that helps, Dan
  19. Hi @RobW thanks for posting. Hornbill Business Process features user-related operations, one of which can archive a Hornbill user account. An archived user doesn't consume a subscription, so in essence setting a Hornbill user account to a state of "Archived" will release the Hornbill subscription automatically. The operation I'm talking about can be found in a typical Hornbill Automation node. The "User ID" would be the Hornbill User ID of the leaver and could be fed from a variable; the exact variable will depend on where the leaver information is stored against the request, or whether it was captured in a progressive capture form when logging the leaver request. This is one operation that I've previously advised on and worked with customers to implement. With the Hornbill ITOM module it's also possible to manage AD user objects and AD groups through the Windows Account Management packages available with Hornbill ITOM: https://wiki.hornbill.com/index.php?title=ITOM_Package_Library I hope the Archive Hornbill User Account operation can offer some value, but there's definitely opportunity for further automation in a leaver process. Thanks Dan
  20. @Josh Bridgens great, I think this should be enough for development to go on so I'll leave it in @ArmandoDM 's capable hands! Dan
  21. Thanks @Josh Bridgens, that's useful info. I've just removed the attachment for security reasons, in case it contains anything sensitive. Based on the console error, it looks like something around FAQs. Dan
  22. @Josh Bridgens please could you also confirm what self-service interface your users access? We can identify this if you can provide the URL. Thanks, Dan
  23. Hi @Josh Bridgens can I ask you to inspect the browser console when you click into the service? Depending on the root of the problem, something may be shown here in the form of a red error. The browser console can be opened by pressing F12 and then clicking on the tab labelled "Console". Once you've selected this, proceed to click the service in My Services (which should take you to your available catalog items) and see if anything appears. The image shows how this appears in Chrome; other browsers have a slightly different interface, but the principle is the same. Thanks, Dan
  24. Hi @Stephen.whittle,  thanks for your post. I've had a look into this requirement and it isn't possible to get the output you need. To achieve this, the measure interface would need to allow us to obtain the average of a calculated value, because the age of a request isn't something that's stored in the database. Storing such a value wouldn't make sense, as age changes as time passes; it would be obtained by calculating the difference between the date the request was logged and now (the time the report was run or the measure was sampled). If you're exporting data out of Hornbill, you can use the following query to get what you need:
SELECT ROUND(AVG(DATEDIFF(NOW(), h_itsm_requests.h_datelogged)), 1) AS Average_Days_Open
FROM h_itsm_requests
WHERE h_status NOT IN ('status.resolved', 'status.closed', 'status.cancelled')
You can add criteria to the WHERE clause to get the average age of whatever set of requests you're interested in. The ROUND function is simply there to ensure the result is to a reasonable number of decimal places. What we CAN achieve through a measure in Hornbill is a count of requests that are older than a certain number of days. Not exactly what you want, but it could be used to give an indication of improvement if the number of requests older than a certain number of months was decreasing. I'd set this number of months to the threshold that is unacceptable, i.e. if you believe that the current state of your Service Delivery operations means there should be no requests older than 3 months, then let's count how many requests are older than three months. The target would be to reduce the number of requests older than three months down to zero (or as close to zero as possible). Now, I'm not saying this is a perfect substitute for monitoring the average age, and therefore I've fed back the requirement to the Product Owner.
While COUNTs are quite rudimentary, they still offer value in that they allow you to see change and thus contribute to monitoring whether a situation is improving or not. So, to create a measure that will sample the "No. of tickets older than 90 days", here's the WHERE clause for this COUNT:
h_datelogged < DATE_SUB(CURDATE(), INTERVAL 90 DAY) AND h_status NOT IN ('status.resolved', 'status.closed', 'status.cancelled')
Other points to note are that the date ranging columns are empty, and I've specified some saved data columns. The reason the date ranges are empty is that, for the purposes of this measure, we don't want to group the request records by any date value; it doesn't matter when the request was logged, resolved, or closed. What we want is a snapshot of the number of active requests that are older than 90 days. I use the word "snapshot" because this situation will only be true at the point the sample is taken. It's worth noting that this configuration of measure can't be sampled retrospectively: you can't go back and determine how many requests were older than 90 days at this time last month, because the sample is taking a picture of what the data looks like at that point in time. You'll notice that when you resample this measure for the first time, all the samples will be the same, so I recommend setting the sample history to 1 the first time you do this. Once it's sampled, increase the sample history to a more appropriate amount and then let sample data build up naturally as per the frequency you've set (daily/weekly/monthly/whatever). The saved data columns are useful when putting this measure in a widget, as they'll show how many requests are older than 90 days for each priority, team, or whatever saved data column you add. I'd use a chart-type widget with a data type of "Measure Samples Group By".
I hope that helps. I appreciate that it's probably not exactly what you were after, but it may go some way to supporting the insight you're trying to gain into your requests. Dan
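Pulling the pieces of that post together, the equivalent stand-alone SQL for the "older than 90 days" count would look something like the sketch below. It uses only the columns and status values already mentioned; adapt the interval and criteria to your own threshold:

```sql
-- Sketch: count active requests logged more than 90 days ago.
SELECT COUNT(*) AS Requests_Older_Than_90_Days
FROM   h_itsm_requests
WHERE  h_datelogged < DATE_SUB(CURDATE(), INTERVAL 90 DAY)
  AND  h_status NOT IN ('status.resolved', 'status.closed', 'status.cancelled');
```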
  25. Hi All,  since I posted my original explanation above (which included details on how to join the table h_cmdb_links with h_cmdb_assets using a SQL CONCAT function), there has been a change to h_cmdb_assets to include a column to hold the asset URN. This is called h_asset_urn. This makes writing the above report much easier, in that we can just select the columns from each table without needing to use the CONCAT function in the custom criteria box. The better configuration is shown below. Thanks, Dan
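As a rough illustration of the simplified join now possible via h_asset_urn, a query of the following shape could be used. The link-side column names (h_fk_id_l, h_fk_id_r) and the asset name column (h_name) are assumptions here; confirm the exact names in the Entity Explorer for your instance:

```sql
-- Sketch: join link records to asset details via the URN column.
-- h_fk_id_l / h_fk_id_r and h_name are assumed column names; verify
-- them in the Entity Explorer before relying on this query.
SELECT l.h_fk_id_l,        -- the record on the other side of the link
       a.h_name            -- the linked asset's name
FROM   h_cmdb_links  l
JOIN   h_cmdb_assets a
  ON   a.h_asset_urn = l.h_fk_id_r;
```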