SamS (Hornbill Developer)

About SamS
  • Rank: Senior Member
  • Location: Hornbill Offices
  • Community Reputation: 88 (Excellent)
  1. Hi @Paul Alexander, I think I know what's up. The report export encapsulates the task references in double-quotes. If you open the .csv in a text editor and search-and-replace all double-quotes (replacing them with nothing), you should be OK.
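The same quote-stripping pass can be scripted instead of done in a text editor; a minimal sketch, assuming the export is a plain UTF-8 .csv (the file name below is an invented example):

```python
def strip_quotes(csv_text: str) -> str:
    """Remove the double-quotes the report export wraps around task references."""
    return csv_text.replace('"', '')

# Example usage against a file (file name is an assumption):
# with open("report_export.csv", encoding="utf-8") as f:
#     cleaned = strip_quotes(f.read())
# with open("report_export_clean.csv", "w", encoding="utf-8") as f:
#     f.write(cleaned)
```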
  2. Hi @Paul Alexander, Is there a significant difference (in the .csv file) between the cancelled tasks and those that were not cancelled (e.g. a missing TSK prefix)? You said it was a 50-50 success rate. Was it every second task in the list that worked/didn't work? The error suggests that the task does not exist, so the obvious question: do the remaining tasks still exist?
  3. Hi @Paul Alexander, We've created a utility to cancel those tasks which were created prior to the system setting change: https://wiki.hornbill.com/index.php/Task_Cancelation_Utility
  4. Hi @AndyHill, The Attribute... bit lets you configure which organisational units a person can be assigned to (this assumes the organisational unit (e.g. Org) is already created in Hornbill). Instead of {{.companyName}} you can use any other Azure attribute: if the value you wish to use as the organisation is already present in one of the attributes, you can simply substitute .companyName for the name of that Azure attribute. As an alternative, you can hard-code the group name and, with that, just create a new configuration file PER GROUP, so:
     "UsersByGroupID":[
       {
         "ObjectID":"Group Object ID",
         "Name":"Specific Group Object Name"
       }
     ]
     would be matched with:
     {
       "Attribute":"Specific Group Object Name",
       "Type":5,
       "Membership":"member",
       "TasksView":false,
       "TasksAction":false
     }
  5. Hi @jeffgleed, It appears that you have accidentally copied the request XML instead of the response XML. If your result is the following, then no results are being returned:
     <methodCallResult status="ok">
       <flowCodeDebugState>
         <step>...</step>
         <executionId>...</executionId>
       </flowCodeDebugState>
     </methodCallResult>
     If you want to verify this, just drop the "customerEquals" element to return 100 rows. If you have a permissions error, you might need to ensure your system administrator account has the "Service Manager" and "Service Desk Admin" Security Roles.
  6. Hi @samwoo, I've updated the CSV Asset Import script with the fix and released the binary. The owner of the DB Asset Import script has a notification waiting with the fix in it as well. Expect a release early in the new year. IF I manage to compile that particular binary myself, then I will provide you with that binary via SFTP.
  7. Hi @samwoo, A couple of the user ID fields (owned_by and used_by) are converted, behind the scenes, to a more involved unique identifier. This is not the case for last_logged_on_user, but then I can't find anywhere that says it should be. Could you please run the following SQL in the Admin section (Home > System > Data > Data Direct):
     SELECT h_pk_asset_id, h_last_logged_on_user
     FROM h_cmdb_assets_computer
     WHERE h_last_logged_on_user IS NOT NULL
     and see what data is in the results?
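For illustration only, the shape of that check can be mimicked against a local mock of the table; the table and column names come from the query above, while the sample rows (including the identifier format) are invented:

```python
import sqlite3

# In-memory mock of the h_cmdb_assets_computer table, purely to illustrate
# what the NOT NULL filter returns; the sample values are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE h_cmdb_assets_computer "
             "(h_pk_asset_id INTEGER, h_last_logged_on_user TEXT)")
conn.executemany("INSERT INTO h_cmdb_assets_computer VALUES (?, ?)",
                 [(1, "urn:sys:user:jsmith"),   # populated
                  (2, None),                    # never logged on
                  (3, "jbloggs")])              # plain user ID

rows = conn.execute(
    "SELECT h_pk_asset_id, h_last_logged_on_user "
    "FROM h_cmdb_assets_computer "
    "WHERE h_last_logged_on_user IS NOT NULL").fetchall()
print(rows)  # only the assets that actually hold a value
```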
  8. Hi @Joanne, The "Could not get Asset Class..." error could be because the "Server" Asset Type does not exist (i.e. no results in the query would create that error). Could you please confirm whether "Server" is set up? That being said, the preceding (protocol) error might point to something else (though it could also be related).
  9. Hi @Nick Brailsford, From the little "Query" bit I see, I suspect you are importing data from ITHD/ITSMF, where the asset data is stored in the "equipmnt" table. In this case, the primary ID/unique identifier would be "equipid", so that would be the setting in "DBColumn" (not "h_name").
  10. Hi @Darren KIng, A new release incorporates the fix for this: https://github.com/hornbill/goDb2HUserImport/releases/tag/1.2.3
  11. Hi @Darren KIng, We tested the original executable with 20,000+ entries and it got well beyond the 100th user your log file suggests it ended on. That being said, we did find a place to optimize it and have released it (as version 1.2.2) on GitHub [ https://github.com/hornbill/goDb2HUserImport/releases ]. I have also updated the wiki instructions to emphasize the SQL best practice of specifying fields in the SQL SELECT statement.
  12. Hi @Darren KIng, Though admittedly I haven't tested the script with more than 10,000 entries, I would first wonder what your query returns. In your example you are running SELECT * on a view; this could potentially bring in more fields than you have actually mapped. Some of those extra fields might be longvarchar and/or blob fields, which would use a lot of RAM to store. IF they are not being mapped, then I would ensure they are not included in the results. Could you please run a SELECT listing each field you are actually using, or confirm to me here that the result does not contain many more (longvarchar/blob) fields? e.g.:
      SELECT objectID, siteType, ... FROM view_#####
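One way to keep the SELECT list and the mapping in sync is to build the statement from the list of mapped columns; a minimal sketch, where the column names beyond objectID/siteType and the view name are invented for illustration:

```python
# Columns the import configuration actually maps (partly invented examples).
mapped_columns = ["objectID", "siteType", "siteName"]

def build_select(columns, source):
    """Build an explicit SELECT so unmapped (possibly blob) fields stay out
    of the result set entirely."""
    return f"SELECT {', '.join(columns)} FROM {source}"

query = build_select(mapped_columns, "view_assets")
print(query)  # SELECT objectID, siteType, siteName FROM view_assets
```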
  13. That gets my "Thumbs up"
  14. Hi Lyonel, It mostly depends on WHAT information you want to get out.
      Option 1 will not, I think, give you a sensible number; it compares calls logged in a period with the number of calls breached within that same period. Please note that some of those breached calls might have been logged before the period. The percentage generated would imply, but is not, the percentage of calls logged within the period that met their SLA.
      Option 2 is where my vote would go for %breach (one is comparing resolved calls). Alternatively, one can report on the percentage of breached calls currently open.
      One can also report on: out of the calls logged within this period, how many have breached (and transform that into a percentage)? Please note that calls could still be open and (about to) breach. OR: out of the calls logged within this period, how many of the resolved calls have breached? Needless to say, this percentage will vary over time as more calls from that period get closed.
      The other thing to keep in mind is the various SLAs. Tallying P1's and P3's in the same heap might give a very skewed picture: e.g. 99% SLA attainment overall could hide 100% missing of P1's.
      Anyhow, I hope I have given you some food for thought. Sam
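The difference between the two denominators can be shown with a toy data set; all figures are invented, and for simplicity the sketch treats a breached call's resolution date as the date its breach is counted:

```python
from datetime import date

# Toy call records: (logged date, resolved date or None, breached?). Invented.
calls = [
    (date(2018, 5, 20), date(2018, 6, 5), True),   # logged BEFORE the period
    (date(2018, 6, 2),  date(2018, 6, 10), False),
    (date(2018, 6, 8),  date(2018, 7, 2),  True),  # resolved after the period
    (date(2018, 6, 15), None, False),              # still open
]
start, end = date(2018, 6, 1), date(2018, 6, 30)

# Option 1: breaches landing in the period vs calls logged in the period.
# The two sets need not describe the same calls, hence the skew.
logged = [c for c in calls if start <= c[0] <= end]
breached_in_period = [c for c in calls
                      if c[1] is not None and start <= c[1] <= end and c[2]]
opt1 = len(breached_in_period) / len(logged) * 100

# Option 2: breached share of the calls resolved within the period.
resolved = [c for c in calls if c[1] is not None and start <= c[1] <= end]
opt2 = sum(1 for c in resolved if c[2]) / len(resolved) * 100

print(f"opt1={opt1:.1f}% opt2={opt2:.1f}%")  # opt1=33.3% opt2=50.0%
```

Note how the only breach counted in both metrics was logged in May, so neither number describes June's logged calls, which is exactly the caveat above.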