
SamS
Hornbill Developer
  • Posts: 1,511
  • Joined
  • Last visited
  • Days Won: 25

Everything posted by SamS

  1. Hi @Izu, Can you confirm that you are NOT using a proxy(*), and that the following URLs are whitelisted(**)? Could you please also confirm:
     • that there is no firewall limiting outbound traffic over HTTPS (admittedly unlikely, but the firewall might have a whitelist of which applications are allowed to make the connection);
     • that there is no virus scanner which might be blocking ports/traffic (even less likely);
     • that https://eurapi.hornbill.com/INSTANCENAME/xmlmc/ opens to a page within the browser (the browser would likely use the proxy, so a no-show of the page would only mean that something is most definitely misconfigured on the network side).
     (*) IF you are using a proxy, please double-check that you have implemented the section under HTTP Proxies: https://wiki.hornbill.com/index.php/Hornbill_Data_Export#HTTP_Proxies
     (**) The URLs to whitelist are as follows:
     • https://files.hornbill.com/instances/INSTANCENAME/zoneinfo - allows access to look up your instance API endpoint
     • https://files.hornbill.co/instances/INSTANCENAME/zoneinfo - backup URL for when files.hornbill.com is unavailable
     • https://eurapi.hornbill.com/INSTANCENAME/xmlmc/ - this is your instance API endpoint; eurapi can change, so you should use the endpoint defined in the previous URL
     In ADDITION to this, please visit https://files.hornbill.com/instances/INSTANCENAME/zoneinfo and note the "endpoint" in that result. THAT resulting URL ALSO needs to be whitelisted.
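     As a quick way to check the endpoint lookup from the affected machine, a small script along these lines can parse the zoneinfo document and pull out the endpoint to whitelist. This is only a sketch: the exact shape of the zoneinfo document (assumed here to be JSON with an "endpoint" key) and the sample content are assumptions, so verify against what your instance actually returns.

```python
import json

def extract_endpoint(zoneinfo_text):
    # Parse the zoneinfo document and return its "endpoint" value.
    # ASSUMPTION: the document is JSON carrying an "endpoint" key;
    # check this against your instance's real output.
    data = json.loads(zoneinfo_text)
    return data.get("endpoint")

# Hypothetical sample of what a zoneinfo document might look like:
sample = '{"name": "INSTANCENAME", "endpoint": "https://eurapi.hornbill.com/INSTANCENAME/xmlmc/"}'
endpoint = extract_endpoint(sample)
print(endpoint)
```

     Fetching the document itself (e.g. with curl or your HTTP library of choice) from the affected machine also doubles as a connectivity test through any proxy or firewall.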
  2. Hi @nasimg, I haven't found any API documentation (API = Application Programming Interface; the documentation which outlines how a programmer can talk to the program in question) for Enboarder - if you know of a link, then please share it and we will have a look.
  3. Hi @Paul Alexander, I think I know what's up. The report export encapsulates the tasks in double-quotes. If you open the .csv in a text editor and search-and-replace (remove) all double-quotes, you should be OK.
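     A minimal sketch of that search-and-replace in code form (the sample line is illustrative, not taken from an actual export):

```python
def strip_quotes(text):
    # Remove every double-quote character the report export
    # wrapped around the task references.
    return text.replace('"', '')

# Example line as it might appear in the exported .csv (illustrative):
line = '"TSK20190001","cancel me"'
cleaned = strip_quotes(line)
print(cleaned)  # TSK20190001,cancel me
```

     The same effect can be achieved in any text editor; the point is simply that the quoted value "TSK..." is not recognised as a task reference until the quotes are gone.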
  4. Hi @Paul Alexander, Is there a significant difference (in the .csv file) between the cancelled tasks and those not cancelled (e.g. a missing TSK prefix)? You said it was a 50-50 success rate. Was it every second task in the list which worked/didn't work? The error suggests that the task does not exist, so the obvious question: do the remaining tasks still exist?
  5. Hi @Paul Alexander, We've created the following utility to cancel tasks - https://wiki.hornbill.com/index.php/Task_Cancelation_Utility for those tasks which were created prior to the system setting change.
  6. Hi @AndyHill, The Attribute... bit allows one to configure the organisational units one can assign the person to (this assumes that the organisational unit (eg Org) is already created in Hornbill). Instead of {{.companyName}} you can use any other Azure attribute. If the value you wish to use as the organisation is already mentioned in one of the attributes, then you can simply substitute .companyName for the name of the Azure attribute. As an alternative, you can hard-code the group name and, with that, just create a new configuration file PER GROUP, so:
     "UsersByGroupID": [
       {
         "ObjectID": "Group Object ID",
         "Name": "Specific Group Object Name"
       }
     ]
     would be matched with:
     {
       "Attribute": "Specific Group Object Name",
       "Type": 5,
       "Membership": "member",
       "TasksView": false,
       "TasksAction": false
     }
  7. Hi @jeffgleed, It appears that you have accidentally copied the request XML instead of the response XML. If your result is the following, then no results are being returned:
     <methodCallResult status="ok">
       <flowCodeDebugState>
         <step>...</step>
         <executionId>...</executionId>
       </flowCodeDebugState>
     </methodCallResult>
     If you want to verify this, then just drop the "customerEquals" element to return 100 rows. If you have a permissions error, you might need to ensure your system administrator account has the "Service Manager" and "Service Desk Admin" Security Roles.
  8. Hi @samwoo, I've updated (and released the binary to) the CSV Asset Import script with the fix. The owner of the DB Asset Import script has a notification waiting with the fix in it as well. Expect a release early in the new year. IF I manage to compile that particular binary myself, then I will provide you with that binary via sFTP.
  9. Hi @samwoo, A couple of the user ID fields (owned_by and used_by) are converted, behind the scenes, to a more involved unique identifier. This is not the case for last_logged_on_user, but then I can't find anywhere that says this should be the case. Could you please run the following SQL in the Admin section under Home > System > Data > Data Direct:
     SELECT h_pk_asset_id, h_last_logged_on_user FROM h_cmdb_assets_computer WHERE h_last_logged_on_user IS NOT NULL
     and see what data is in the results?
  10. Hi @Joanne, The "Could not get Asset Class..."-error could be because the "Server" Asset Type does not exist (i.e. no results in the query would create that error). Could you please confirm whether "Server" is set up? That being said, the preceding (protocol) error might point to something else (it could also be connected).
  11. Hi @Nick Brailsford, From the little "Query" bit I see, I suspect you are importing data from ITHD/ITSMF - where the asset data is stored in the "equipmnt" table. In this case, the primary ID/unique identifier would be "equipid", so that would be the setting in "DBColumn" (not "h_name").
  12. Hi @Darren KIng, A new release incorporates the fix for this: https://github.com/hornbill/goDb2HUserImport/releases/tag/1.2.3
  13. Hi @Darren KIng, We tested the original executable with 20,000+ entries and it got well beyond the 100th user your log file suggests it ended on. That being said, we did find a place to optimize it and have released it (as version 1.2.2) on GitHub [ https://github.com/hornbill/goDb2HUserImport/releases ]. I have also updated the wiki instructions to emphasize the SQL best practice of specifying fields explicitly in the SELECT statement.
  14. Hi @Darren KIng, Though, admittedly, I haven't tested the script with more than 10,000 entries, I would first wonder what is returned in your query results. In your example, you are running SELECT * on a view; this could potentially bring in more fields than you are actually mapping. Some of those extra fields might be longvarchar and/or blob fields, which would use a lot of RAM to store. IF they are not being mapped, then I would ensure they are not included in the results. Could you please run a SELECT specifying each field you are actually using - or confirm to me here that the result does not contain many more (longvarchar/blob) fields? eg:
     SELECT objectID, siteType, ... FROM view_#####
  15. That gets my "Thumbs up"
  16. Hi Lyonel, It mostly depends on WHAT information you want to get out.
      Option 1 will not, I think, give you a sensible number; it compares calls logged in a period with the number of calls breached within that same period. Please note that some of those breached calls might have been logged before the period began. The percentage generated would imply, but is not, the percentage of calls logged within the period which met their SLA.
      Option 2 is where my vote would go for %breached (one is comparing resolved calls). Alternatively, one can report on the percentage of breached calls currently open.
      One can also report on: out of the calls logged within this period, how many have breached (and transform that into a percentage)? Please note that calls could still be open and (about to) breach. OR: out of the calls logged within this period, how many of the resolved calls have breached? Needless to say, this percentage will vary over time as more calls from that period get closed.
      The other thing to keep in mind is the various SLAs. Tallying P1s and P3s in the same heap might give a very skewed picture: e.g. 99% meeting SLA overall could hide 100% missing of P1s.
      Anyhow, I hope I have given you some food for thought. Sam
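      To illustrate that last point with toy numbers (entirely made up for this sketch): two P1s that both breached, alongside 198 P3s that all met their SLA, still produce a healthy-looking overall figure.

```python
# Hypothetical resolved-call counts per priority.
resolved = {
    "P1": {"total": 2,   "breached": 2},
    "P3": {"total": 198, "breached": 0},
}

total = sum(v["total"] for v in resolved.values())
breached = sum(v["breached"] for v in resolved.values())

# Percentage of resolved calls meeting SLA, all priorities lumped together:
overall_met_pct = 100 * (total - breached) / total
# The same percentage for P1s alone:
p1_met_pct = 100 * (resolved["P1"]["total"] - resolved["P1"]["breached"]) / resolved["P1"]["total"]

print(overall_met_pct)  # 99.0 -- looks healthy...
print(p1_met_pct)       # 0.0  -- ...yet every P1 breached
```

      Splitting the figures per SLA/priority, as suggested above, avoids this kind of masking.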