Everything posted by Steve G

  1. Hi @AndyGilly, This error would suggest that the value provided to the SiteID input parameter isn't a valid, or accessible, ECM site. The operation basically uses the Set-Location cmdlet to set the execution location to the ECM site drive using the provided site ID, then uses the Add-CMDeviceCollectionDirectMembershipRule cmdlet (alongside a couple of other bits) to add the supplied device to the collection. The SiteID will be case-sensitive too, so it may be worth checking this first. Cheers, Steve
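     For reference, a minimal PowerShell sketch of what the operation does under the hood - the site code, device name and collection name here are hypothetical, and the exact parameters the operation passes may differ:

       # Load the ConfigMgr module (path assumes a default console install).
       Import-Module "$($ENV:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"

       # The site ID is used as the PSDrive name, so "PS1" must match exactly.
       Set-Location "PS1:"

       # Resolve the device, then add it to the collection as a direct member.
       $device = Get-CMDevice -Name "SOME-DEVICE"
       Add-CMDeviceCollectionDirectMembershipRule -CollectionName "My Collection" -ResourceId $device.ResourceID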
  2. Hi @Martyn Houghton, You could use the data::entityBrowseRecords2 API, pointing at the Contact entity in the com.hornbill.core application. For example:

       <methodCall service="data" method="entityBrowseRecords2">
         <params>
           <application>com.hornbill.core</application>
           <entity>Contact</entity>
           <searchFilter>
             <column>h_email_1</column>
             <value>some.email@address.com</value>
             <matchType>exact</matchType>
           </searchFilter>
           <searchFilter>
             <column>h_contact_status</column>
             <value>0</value>
             <matchType>exact</matchType>
           </searchFilter>
         </params>
       </methodCall>

     This would return the contact with the defined email address in the h_email_1 column, with a status of 0 (active). Cheers, Steve
  3. Hi @Ann, You could always access the CSV file using the Database Asset Relationship Import tool via an ODBC data source, writing the query as appropriate to your data. See the DBConf details on the wiki page you've linked above for more information. Thanks, Steve
  4. Hi @JoanneG, The errors are rights-based, so the user who owns the API key you are using doesn't have the correct roles to perform the operations - specifically the role shown in the screenshot. If you apply this role and try again, this should work fine. Cheers, Steve
  5. Hi @samwoo, Yes, this will likely require a fair amount of work to achieve - I expect we'll need to cache all asset and extended information records from Hornbill into memory tool-side, then check each of the assets being imported against that cache. It's been on the to-do list to add this feature for a while though... I'll take a more detailed look in the new year and get back to you. Cheers, Steve
  6. Hi @Jeremy, Apologies for the late response on this. There's a utility node to escape strings ready for insertion into JSON, and this is what you need to use on the field in question before using the result in your JSON. Without pushing the description through this node, and adding it in its vanilla state to the JSON, your output breaks (as you have seen); preparing the string with the workflow utility before adding it to your JSON gives the desired result. Let me know if this helps. Cheers, Steve
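     As a rough illustration (the field name and content here are made up), a description containing quotes or line breaks inserted raw produces invalid JSON, while the escaped version parses cleanly:

       Raw (invalid JSON):
         { "description": "Line one
         with a "quoted" word" }

       Escaped (valid JSON):
         { "description": "Line one\nwith a \"quoted\" word" }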
  7. Hi @JoanneG, The decoding error is due to an erroneous comma in your config. Removing this comma would mean that just Change Request CH00006679 is deleted. Cheers, Steve
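     For illustration only - assuming the config lists the requests to delete in a JSON array, a stray trailing comma like the one below is enough to break the decode, as JSON does not allow trailing commas:

       "RequestReferences": [
         "CH00006679",
       ]

     The "RequestReferences" property name is an assumption here; the point is the comma after the final array element.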
  8. Hi @samwoo, I didn't notice the Create/Update/Both request when doing the PreserveState etc. feature, so I've added that into the tool - v1.12.0 contains that feature: https://github.com/hornbill/goDBAssetImport/releases/tag/v1.12.0 Check out the OperationType property of the AssetTypes objects - you can set them to Both, Create or Update. Leaving the parameter blank (or not including it at all) will retain the previous logic (so Both). Cheers, Steve
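     As a minimal sketch - the asset type name is hypothetical, and the other properties each AssetTypes object needs are omitted:

       "AssetTypes": [
         {
           "AssetType": "Server",
           "OperationType": "Update"
         }
       ]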
  9. Hi @samwoo, I've just released v1.11.0 of the DB Asset Import Tool, which supports preserving Status/SubStatus/Operational Status values when updating assets. It can be downloaded from here: https://github.com/hornbill/goDBAssetImport/releases/latest Check out the new PreserveState, PreserveSubState and PreserveOperationalState options in the AssetTypes definitions. Setting the Status to Active when assets are inserted is already possible - you just need to add: "h_record_state": "1", to the AssetGenericFieldMapping. Cheers, Steve
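     A minimal sketch of the two pieces together, assuming the Preserve options are boolean flags - the asset type name is hypothetical, and other required AssetTypes properties are omitted:

       "AssetTypes": [
         {
           "AssetType": "Laptop",
           "PreserveState": true,
           "PreserveSubState": true,
           "PreserveOperationalState": true
         }
       ],
       "AssetGenericFieldMapping": {
         "h_record_state": "1"
       }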
  10. Hi @Frank Reay, As assets are audited, this can be done with a report against the audit table. I've attached an example report definition (deleted-assets.report.txt), which can be imported into your Service Manager reporting in the admin console. Cheers, Steve
  11. Hi @AndyGilly, That's done - give it 5 mins and you should see the change on your instance. Note: rather than using the Cloud Automations to interact with Service Manager on your instance, you could use the Hornbill Automations to log new requests, which means you no longer need to provide an API Key in KeySafe. Cheers, Steve
  12. Hi @Michael Risby, This was indeed a defect, which we've just fixed and released. Try it again on your instance in 5 mins or so and you should get the requested content type passed to your external endpoint. Let me know how you get on. Cheers, Steve
  13. No worries @AndyGilly. Nope, the process is all automatic, it can just take a few mins for the fix to be delivered. The next time you open your workflow/runbook designer, it should just work. Cheers, Steve
  14. Hi @AndyGilly, Apologies for any inconvenience caused, but this issue is now fixed and will be available on your instance in the next few minutes. Cheers, Steve
  15. Hi @m.vandun, This message just means that the report is already being run on your Hornbill instance, either from the admin tool or by a script that has executed it. You can check the running status of a report in the admin console - it's in the Status column on the Report History tab. You (or your automated scripts) will need to wait until the current report run is complete before firing off another request. Cheers, Steve
  16. Hi @AndyHill, This is already possible using ITOM Runbooks. Cheers, Steve
  17. Hi @JamieMews, This is a known issue with reporting in the platform, which has been fixed and is awaiting release. As a workaround, we've updated the R scripts (and the Python ones too) so that they now support the use of XLSX report output. Please see the post linked above for details of the R updates. Kind regards, Steve
  18. @Jeremy Strange, can you PM me a copy of your workflow so I can take a look please?
  19. @Jeremy The easiest way to check whether this is an issue with your endpoint or with Hornbill sending the requests would be to set up your own RequestBin (I just used a free public one in the above test), point your HTTP Request node at that - leaving the headers and body as they are and just replacing the URL - then check to see if the request is received when your workflow sends it. This would prove that the node and workflow are working, and that there may be an issue with your endpoint. Thanks, Steve
  20. Hi @Jeremy, The source IP should be 87.117.243.10. Cheers, Steve
  21. Hi @StephC, I've just released version 1.6.0 of the Hornbill Data Export Tool, which supports the use of XLSX files from reports, as well as the existing CSV file support. It can be downloaded from GitHub, and is documented on the Hornbill Wiki - specifically, look for UseXLSX in the Configuration section, and make sure you switch XLSX on in the report config under Output Formats > Additional Data Formats. Cheers, Steve
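     A minimal fragment of the tool's JSON configuration with the new flag switched on (surrounding settings omitted, and a boolean value assumed, in line with similar options):

       {
         "UseXLSX": true
       }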
  22. Hi @Joyce, That's likely due to the report output containing UTF-8 characters that sit outside the codepage Power BI expects - Power BI doesn't play nicely with all UTF-8 characters (unlike pretty much every other system!). I've added some extra code into the script to do the conversion to allow Power BI to import the data; v1.8.0 is on GitHub now. Note: as per the instructions, you will need to change the csvEncoding variable to fix this issue - a value of ISO-8859-1 will probably do it. Cheers, Steve
  23. For anyone who is experiencing this issue and is using the R scripts to pull data from Hornbill Reporting and push it into Power BI, please see the post linked above for an immediate workaround. Thanks, Steve
  24. @m.vandun @chriscorcoran @Paul Alexander @Will J Douglas Afternoon all, I've just released v1.7.0 of the R scripts - they now support using the XLSX output of the reports (as well as the existing CSV output), so you can get your Power BI reports back up and running now instead of waiting on the platform fix to the CSV issue (which is actually done and will be in the next platform update). Hope this helps, Steve
  25. Good afternoon, We've just released v1.7.0 of the R scripts that are used to pull data from Hornbill Reporting into Power BI. The Report and HistoricReport scripts now support the use of XLSX files in the output of Hornbill reports, as well as the existing CSV options. The updated scripts can be downloaded now from GitHub. As always, please review the wiki documentation for the changes, but as a very basic overview:
       • XLSX output needs to be enabled against your target report configuration;
       • useXLSX needs to be set to TRUE in the scripts for them to target the XLSX file in the report output rather than the CSV file;
       • when useXLSX is TRUE, the readxl package needs to be installed on the machine running the scripts.
     Thanks, Steve
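     A minimal sketch of the relevant lines in the scripts, using the variable and package names noted above:

       # Target the XLSX file in the report output rather than the CSV file.
       useXLSX <- TRUE

       # readxl must be installed once on the machine running the scripts:
       # install.packages("readxl")
       library(readxl)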