
DanielRi

Hornbill Product Specialists
  • Content Count

    227
  • Joined

  • Last visited

  • Days Won

    18

DanielRi last won the day on March 23 2018

DanielRi had the most liked content!

Community Reputation

63 Excellent

About DanielRi

  • Rank
    Product Specialist

Profile Information

  • Gender
    Male

Recent Profile Visitors

2,020 profile views
  1. Hi Malcolm, hope you're well. I've edited the post to conceal the company-specific information that was in there and removed the attachment for the same reason; you can never be too careful!

     Looking at the bounce-back, the reason given is that the email address "system.administrator@hornbill.com" doesn't exist. This means one of the recipients has this email address specified in their user account. The user account in question will be the "Admin" user, which has this email address set as default, and the reason it's trying to send the notification to this address is that the admin user is part of the team, which is quite common when we've been testing/configuring things. This can be addressed by one of the following:

     1) Disable assignment for the "admin" user for the team in question. Assignment options for team members can be managed via Hornbill Administration > Applications > Hornbill Service Manager > Configuration > Service Desk https://wiki.hornbill.com/index.php/Service_Desk_Administration

     2) Remove the "admin" user from the team in question. This can be done via the Organisation Structure in Hornbill Administration > System > Organisational Structure https://wiki.hornbill.com/index.php/Organisation (see the section "Removing Users from a Group")

     3) Change the email address of the "admin" account to a valid email address.

     I hope that helps,

     Dan
  2. Hi James, as Trevor says, for live chat to be available to your basic users, each basic user must have the "Portal Chat Session User" role assigned to them. The method he describes will allow you to quickly add this to all your existing basic users now. Longer term, you may wish to consider including this role in any user import involved in managing your basic user accounts. Dan
  3. Hi Adrian, thanks for your post. This is expected behaviour as there are indeed posts that haven't been read by the owner (in this case the owner is "No Owner"). Once an owner is assigned and has read the posts, the request will be considered read and the light yellow colour will disappear. The unread colour can be changed via the following Service Manager setting: webapp.view.ITSM.serviceDesk.requests.list.unreadColour. I hope that helps, Dan
  4. Hi Izu, thanks for your post. To get to the root of this issue it's worth revisiting how the asset import utility operates (the same principle can actually be applied across the range of Hornbill's data import utilities). If the utility recognises that a record already exists in Hornbill, it will update that record. If the utility determines that a record does not exist, it will create a new record. So the focus of the investigation should be on what the utility uses to determine whether a record already exists in Hornbill or not.

     Considering the database asset import specifically, this is achieved through the "Asset identifier" configuration that exists for each asset type you define in your conf.json file. This involves specifying a "DBColumn", "Entity", and "EntityColumn", which can be described as follows:

     DBColumn - the unique column that exists in your data source (i.e. where the records are being imported from) and is included in the database query.

     Entity - the Hornbill entity where the data is stored. Now this one is a bit Hornbilly. Basically it's a roundabout way of referring to the table where the unique column exists in Hornbill.

     EntityColumn - specifies the unique identifier column from the Hornbill entity, i.e. the actual column name that exists in the Hornbill table. Let's call it the "Hornbill identifier column".

     Below is an example from our wiki showing the configuration of the asset identifier for records being imported as Laptop-type assets (https://wiki.hornbill.com/index.php/Database_Asset_Import):

     {
       "AssetType": "Laptop",
       "Query": "xxxxxxxxxxxxxxx",
       "AssetIdentifier": {
         "DBColumn": "MachineName",
         "Entity": "Asset",
         "EntityColumn": "h_name"
       }
     },

     I appreciate that the concept of an entity is something that isn't explained very often and can be the most common stumbling block if you're new to Hornbill. Here are the guidelines to follow when doing an asset import:

     If the unique Hornbill identifier column (EntityColumn) you're using exists in the table h_cmdb_assets, then the Entity will be "Assets"
     If the unique Hornbill identifier column exists in the table h_cmdb_assets_computer, then the Entity will be "AssetsComputer"
     If the unique Hornbill identifier column exists in the table h_cmdb_assets_computer_peripheral, then the Entity will be "AssetsComputerPeripheral"
     If the unique Hornbill identifier column exists in the table h_cmdb_assets_mobile_device, then the Entity will be "AssetsMobileDevice"
     If the unique Hornbill identifier column exists in the table h_cmdb_assets_network_device, then the Entity will be "AssetsNetworkDevice"
     If the unique Hornbill identifier column exists in the table h_cmdb_assets_printer, then the Entity will be "AssetsPrinter"
     If the unique Hornbill identifier column exists in the table h_cmdb_assets_software, then the Entity will be "AssetsSoftware"
     If the unique Hornbill identifier column exists in the table h_cmdb_assets_telecoms, then the Entity will be "AssetsTelecoms"

     So, assuming the entity and Hornbill identifier column are specified correctly, if new records are still being created unexpectedly, I'd check the value contained in your unique DB column in your source data. Perhaps this has changed and therefore no longer matches any record previously imported. A quick way to sanity-check the matching against a single value is sketched just below this post.

     I hope that helps,

     Dan
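     As an addendum: to sanity-check whether a particular source value already matches an existing asset, a simple query (for example via a report or a custom SQL widget) along these lines can help. This is only a sketch; it assumes h_name is the EntityColumn you're matching on, and 'LAPTOP-0042' is a made-up value taken from your DBColumn.

     select h_name from h_cmdb_assets where h_name = 'LAPTOP-0042'

     If this returns a row, the utility should treat the incoming record as an update; if it returns nothing, a new asset will be created.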
  5. For the benefit of the broader community and to conclude this thread, this error can be caused when the userID referencing the manager of the Co-worker contains non-alphanumeric characters, i.e. this error is not necessarily representative of an issue with the userID of this Co-worker, but with the userID stored in the column h_manager in the user profile (table: h_sys_user_profiles).

     When a Co-worker profile is loaded, it tries to retrieve the manager handle (full name) so it can display it in the profile you're viewing. It does this based on the ID stored in the column h_manager, which exists in the table h_sys_user_profiles (the table which stores all the user profile properties such as personal interests and qualifications).

     If you're receiving this error, it's likely due to some incorrect configuration in the LDAP user import (https://wiki.hornbill.com/index.php/LDAP_User_Import) relating to the manager lookup, specifically that a "Manager Regex" has not been specified. The standard manager regex, which satisfies the majority of situations, is CN=(.*?)(?:,[A-Z]+=|$) - ensure this is set in the Manager Lookup section of the appropriate user import configuration. A worked example of what this regex extracts is shown below.
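     To illustrate what that regex does, here is a worked example against a hypothetical distinguished name (the DN below is made up for illustration):

     Manager Regex:            CN=(.*?)(?:,[A-Z]+=|$)
     Manager attribute value:  CN=Jane Smith,OU=Service Desk,DC=example,DC=com
     Captured value:           Jane Smith

     Without the regex, the whole DN (commas, equals signs and all) can end up being carried through to the manager lookup, which is where the non-alphanumeric characters come from; with the regex in place, only the portion after CN= is used.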
  6. Thanks for the clarification. I'm not entirely convinced (yet) that the "Failed to Create User..." error is related to the failure to set the account status. I believe that where the utility is concerned, the creation of the user account and the setting of the status are separate actions (if you inspect the userCreate method call, account status does not feature in its inputs - https://api.hornbill.com/docs/admin/?op=userCreate). First the utility will create the account via "userCreate", and then make additional method calls (such as userProfileSet, userSetAccountStatus, etc.) to complete the exercise. The question is whether it attempts "userCreate" blindly rather than checking for an existing user ID first, and then either moves on to the additional calls as normal or doesn't bother with them at all.

     Anyway, that aspect may be worth raising with Hornbill Support, but before heading in that direction, are you able to confirm the version of the utility you're currently using? A few bells began to ring while typing the above (but now that I've written it I'll leave it be), and checking the release notes I can see that prior to version 3.03, the user account status section was completely ignored when processing a user account. Versions 3.03 or later have the fix. Thanks, Dan
  7. I'm not sure I fully understand the reference to "Pre-load", but before we talk about that, are you able to confirm whether the user "taxxxxx.xxxxxkh" exists in Hornbill, along with the account's current status?
  8. Due to how the imports operate, there will be users created on the first run of this configuration. This means that we need to cover for that scenario and ensure the status is set on creation of a user (as well as on update of any existing user). Upon creation of a user, I believe the default status will be "active".
  9. Hi Martyn, thanks for your post. In relation to the two points you mention:

     1) The utility tries to create the user if it does not already exist, i.e. it was disabled many years ago. This is one of the principles that all of Hornbill's user import tools operate on: if the import does not find a matching user ID, it believes the user does not exist and so will create it. Therefore, with an import configuration intended to manage the archiving of users, there will be a certain number of redundant accounts imported the first time it's run. Perhaps there's an opportunity to refine your LDAP filter criteria to only focus on more recently disabled user objects?

     2) Fails to update existing user to archived as currently active. Error: "User already exists with account status: active". I would double-check that the status is indeed set to "archived" and ensure the action is set to "Create and Update".

     On a general note, is there a reason why you maintain disabled user objects in your AD long term? I hope that helps. Dan
  10. The defect described in this thread was resolved and the fix is contained in Service Manager build 1392, which was released to live on November 29th. Detail: Catalog item not visible to a Co-worker when their team is excluded but the user is included.
  11. Hi Dan, I've encountered the need to set a "due date" on several occasions during my work with customers. Essentially it's the need to set a target based on a specific event that is influencing the delivery of the work (e.g. the end of the financial year, an audit deadline, or a new member of staff starting), which is what I would understand as being "fluid" in your description, i.e. the due date could be quite a strict deadline (unlikely to move) but the factor influencing the deadline can vary for each piece of work.

     The immediate benefit of such a feature would be easy sorting of the request list, but in terms of performance and service delivery improvement I'd be interested to know what type of reports or metrics would be useful to you in relation to a "due date". At the very least I would expect there is a need to compare the time at which the request was "marked" with the due date target, and then set a 1 or 0 accordingly, representing whether this was marked within the due date or exceeded it. Would you be able to elaborate further in this area, or is the need essentially the same as reporting on Service-Level-based targets?

     I'd also speculate that there should be restrictions when it comes to editing any existing due date on a request. Who is involved in managing and setting due dates in your scenario?

     The concept of a due date is already under discussion by the Product team, and I understand you may already have contributed to some conversations, but it would be great to hear a few thoughts on the latter points RE: reporting and governance around amending a due date. Thanks, Dan
  12. Hi Steffen, I've added the BPM definitions for you to explore at your leisure. These are based on the "EXAMPLE Hornbill Service Request Process" which is shipped out of the box.

     The first definition below utilises two human tasks: example-hornbill-service-request-process-incorporating-manual-authorisation.bpm.txt

     The second definition ("v2") replaces the first human task with a BPM email notification which automatically looks up the line manager of the customer. This approach is dependent on user-manager relationships existing in your instance: example-hornbill-service-request-process-incorporating-manual-authorisation-v2.bpm.txt

     I hope that helps, Dan
  13. Hi Steffen, to add to Steven's description above, here's how that would be achieved. If Line Manager information is being stored against a user (typically populated through the user import), there is scope to replace the first human task with an automated email to the line manager as follows. Of course you can adjust the "on-hold" behaviour to suit your particular scenario(s) too. Dan
  14. During the recent Hornbill Academy Webinar, there was interest in how to create a widget that would display all requests logged vs those that had been resolved each day. This post illustrates how this can be done using Hornbill Advanced Analytics. When the sample period is of importance (daily, weekly, monthly, etc.), a measure is typically the place to go. In essence, what we are looking to do in this example is create two measures and display them in the same widget.

     Creating the measures

     The first measure we are going to create is "No. tickets logged per day". You will need to input the frequency of the measure as daily and ensure you are pulling data from the h_itsm_requests table. Our Date Ranging Column will need to contain h_datelogged, as the measure needs to put the records into the right sample period based on when things were logged. The Query Where Clause will be used to ensure we are only considering records we are interested in. This could simply be h_requesttype = 'service request'.

     The second measure will be used to obtain the "No. tickets resolved per day". In this case, we will use h_dateresolved in the Date Ranging Column. In the Query Where Clause, you may want to use a specific statement to also ensure that only resolved or closed requests are included in the measure. If you don't wish to define this in your statement, the measure will extract all requests that contain a Date Resolved timestamp, regardless of their current status. For example, without that definition, the measure could include any request that has been resolved and subsequently reopened, as this would still contain a historical Date Resolved timestamp. When creating your measure, it is important to identify what information you would like to extract from the system and define your Query Where Clause accordingly. Example where clauses for both measures are sketched after this post.

     Building the Widget

     Now that we have these two measures, we can create our widget. A chart-type widget will be suitable for our needs and, as we are going to be displaying information gathered by a measure and we are interested in the sample period, our data type will be "Measured Samples". Multiple measures can be displayed by adding a new series. In our case, our first series will represent our "Requests logged" data (taken from our "No. tickets logged per day" measure) and our second will represent "Requests Resolved" (taken from "No. tickets resolved per day").

     Does it Add Value?

     This widget provides three pieces of information: the volume of requests logged per day, the volume of requests resolved per day, and, as these are displayed together, a rough idea of how many active calls you would typically expect to have at any one time.

     I hope that helps,

     Dan
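     As a starting point, the two Query Where Clauses could look something like the lines below. This is only a sketch: it assumes you are counting service requests only, and the h_status values shown ('status.resolved' and 'status.closed') are the ones I'd expect to see in h_itsm_requests - check them against your own data before relying on them.

     -- measure 1: "No. tickets logged per day" (Date Ranging Column: h_datelogged)
     h_requesttype = 'Service Request'

     -- measure 2: "No. tickets resolved per day" (Date Ranging Column: h_dateresolved)
     h_requesttype = 'Service Request' AND h_status IN ('status.resolved', 'status.closed')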
  15. During the recent Hornbill Academy Webinar, there was interest in how to create a widget that would display all requests that were due to breach within the next hour. This post illustrates how this can be done using Hornbill Advanced Analytics. In this example, I'll focus on a resolution (fix) breach, but the same approach could be applied to a response breach by amending the database column the widget is focusing on. The fix target is stored as a timestamp in h_itsm_requests.h_fixby and the response target can be found in h_itsm_requests.h_respondby. The Hornbill Application Entity Viewer can be used to explore all the tables in the Hornbill database further (https://wiki.hornbill.com/index.php/Application_Entity_Viewer), or as an alternative there is an h_itsm_requests quick reference available on the Hornbill wiki: https://wiki.hornbill.com/index.php/Reporting

     Grabbing the Records

     When starting out with any metric-building, it's important to be clear about which records we want to see and how we are going to extract them. The requirement here is to identify a list of times that fall into the period of one hour from NOW ("NOW" being whenever the widget refreshes). So we're working with a time period in the future, and a time period that is also continually "moving", i.e. the datum is not fixed. In this situation, I turn to my faithful reference https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html and end up with the following where clause:

     h_fixby BETWEEN NOW() AND DATE_ADD(NOW(), INTERVAL 1 HOUR)

     BETWEEN - allows me to set two boundaries.
     NOW() - gives me the current date/time in the form "YYYY-MM-DD HH:MM:SS", which is used as my first boundary.
     DATE_ADD - gives me the ability to add a time value (interval) to a date. In this case I need NOW plus one hour to use as my second boundary.

     The result is that I get all records from h_itsm_requests where the h_fixby timestamp lies between NOW and NOW plus one hour.

     Displaying the Result

     If I wanted this information on a dashboard, I'd probably go for the widget that can show me a list of data and, for maximum flexibility, use the custom SQL query option. The full query would be as follows (see image below):

     select h_pk_reference AS Reference, h_fixby AS Resolve_By from h_itsm_requests WHERE h_fixby BETWEEN NOW() AND DATE_ADD(NOW(), INTERVAL 1 HOUR)

     Does it Add Value?

     On a final note, in keeping with the Academy Webinar, we should ask if this widget is going to add value. In my opinion, looking at what's going to breach in the next hour doesn't give much room to react; in fact it probably creates anxiety and unnecessary pressure, which could impact performance and morale. You may want to consider a widget that will display what is due to breach tomorrow (a sketch of such a query follows below). Looking further ahead will perhaps give more chance to get on top of the situation. If you're finding that the desk is swamped by breaches, I'd ask whether your Service Level targets are realistic given the manpower, the type of incident/request being dealt with, or the quality of service you're aiming to provide to the customer.
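     For reference, a "due to breach tomorrow" version of the list widget could use a custom SQL query along the following lines. This is a sketch rather than a definitive implementation: it assumes you want anything whose fix target falls on tomorrow's calendar date, and it simply reuses the same columns and functions as the query above with a shifted window.

     select h_pk_reference AS Reference, h_fixby AS Resolve_By from h_itsm_requests WHERE DATE(h_fixby) = DATE_ADD(CURDATE(), INTERVAL 1 DAY)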