SamS

Hornbill Developer

Recent Posts

  1. @Minh Nguyen, I notice that you are mixing single- and double-quoted strings in the JSON string you are passing in generalProperties. I would recommend sticking with double quotes (see https://www.json.org/json-en.html), which means escaping any double quotes within the string itself.
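To sidestep quoting mistakes altogether, you can build the value programmatically and let a JSON library handle the escaping. A minimal Python sketch (the "summary" property name here is only a placeholder, not a confirmed Hornbill property):

```python
import json

# Building the JSON string programmatically guarantees valid,
# double-quoted JSON and escapes embedded double quotes for you.
properties = {"summary": 'Printer reports "out of toner" again'}
payload = json.dumps(properties)

print(payload)  # {"summary": "Printer reports \"out of toner\" again"}
```

Anything produced this way will round-trip cleanly through any compliant JSON parser.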
  2. @Minh Nguyen, I am pretty sure that generalProperties of "State" and "Functional Availability" do NOT exist. The names will all be camelCase, starting lowercase and certainly without any spaces. By "Functional Availability" you likely mean "availabilityState", and "N/A" would be "1" as the value to set it to. Neither of these mistakes should have caused an error about the API key, though, so I can't help you there.
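Putting the naming and quoting points together, a corrected generalProperties value might look like the sketch below (based on the names given above; the exact set of supported properties is Hornbill-specific and should be checked against the documentation):

```python
import json

# camelCase property name, double-quoted JSON throughout.
# "availabilityState": "1" corresponds to "N/A", per the post above.
general_properties = json.dumps({"availabilityState": "1"})

print(general_properties)  # {"availabilityState": "1"}
```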
  3. @danec, Just ensure AutoProvisioning is disabled to prevent the creation of accounts. There is no way to set up priorities between SSO profiles, so customers will have to select the one they think is relevant and, if unsuccessful, try the other instead.
  4. @Giuseppe Iannacone, depending on your specific requirements, two or more import jobs are entirely possible. The import utilities don't need to be on the same machine, nor point at the same data source. We have customers with various sources of truth - e.g. one for (mobile) phone numbers, one for physical location (e.g. room 12) - with matching fields such as email address or UPN. If you want to know what would work for you, you can always arrange a consultancy session.
  5. @Giuseppe Iannacone, I would experiment with using the "Action" of "Update" (instead of "Both") and work out how and when you move across. Please note that you do NOT need to match users on the User ID in our LDAP/Azure/DB user import utilities.
  6. @danec, I would advise you to turn off AutoProvisioning (to disable the creation of new accounts) and set up a second SSO profile. Your customers will then need to select the correct identity provider before logging in - but hopefully you will have communicated which of the two to select, or, indeed, the simple instruction: "if the ABC identity provider doesn't work for you, please try XYZ and know that we have already migrated your account."
  7. Hi @Berto2002, You'd be relying on a reverse DNS lookup - which wouldn't be particularly efficient (we'd need to do it on all inbound traffic) nor, more importantly, reliable. IP addresses are more reliable - and also nicely listed in the example you pointed us to. That list of IP ranges can be used in the API rules (afaik).
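As an illustration of why published IP ranges are convenient for such rules, checking whether an address falls inside a CIDR range is a one-liner in most languages. A Python sketch (the ranges and addresses below are made up for the example):

```python
import ipaddress

# Hypothetical allow-list of sender IP ranges, in CIDR notation.
allowed_ranges = [ipaddress.ip_network("52.96.0.0/14"),
                  ipaddress.ip_network("40.92.0.0/15")]

def is_allowed(addr: str) -> bool:
    """Return True if addr falls inside any allowed range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in allowed_ranges)

print(is_allowed("52.96.1.10"))   # True  - inside 52.96.0.0/14
print(is_allowed("203.0.113.5"))  # False - not in any listed range
```

No DNS round-trip is needed, which is exactly why range lists are preferred over reverse lookups for inbound filtering.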
  8. Two files with the same name, creation date and file size are not necessarily the same file. Of those three, file size is probably the most useful, but still extremely unreliable. There are a few other ways to determine whether two files are exactly the same. Within Hornbill, two files with different names but the same content will be recognised as the same file and thus only stored once (though you would still see multiple entries in your itsm_requests_attachments table). With my "50 files" example, I did actually mean 50 files with exactly the same contents. Whether they are all attached to the same request or spread among multiple requests makes no difference to the disk space usage.
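One common way to decide that two files are byte-for-byte identical, regardless of name or date, is to hash their contents. A minimal Python sketch of the general technique (not a description of Hornbill's internal mechanism):

```python
import hashlib

def content_digest(data: bytes) -> str:
    """SHA-256 of the file contents: identical bytes give an identical digest."""
    return hashlib.sha256(data).hexdigest()

a = b"quarterly report v3"
b = b"quarterly report v3"  # same bytes, could carry a different file name
c = b"quarterly report v4"

print(content_digest(a) == content_digest(b))  # True  - same content
print(content_digest(a) == content_digest(c))  # False - content differs
```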
  9. Hi @will.good, As described in the first "Important" notification box within this wiki article, the way Hornbill optimises disk usage does not work as one might initially expect. In your calculations, 50 files of 2MB would take up 100MB of disk space; within Hornbill's model, however, only 2MB of disk space is used. In your searches through our data, you have accessed the table used to show file information on attachments listed against the requests, not the table that tracks actual disk space usage (i.e. the one that keeps track of how many references there are to a file). The latter table is updated when the archiver utility removes a file (and a file is not removed if something is still linking to it). The "Your Usage" section under Hornbill Solution Center has recently been overhauled; this interface now provides a better view of where disk space is used, and I would refer you to that instead of Database Direct. Regarding your suggestion to disable the hyperlink, there is some merit. However, the file might still exist (e.g. in another request), and it would require modifications to both Service Manager and the utility. The timelines would be quite extended, as Service Manager would need the update first before the utility can be updated with code to set the new field.
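The reference-counting model described above can be sketched as a tiny content-addressed store (purely illustrative, not Hornbill's implementation): attaching the same content twice stores one blob with a count of 2, and a blob is only deleted once nothing references it.

```python
import hashlib

class DedupStore:
    """Content-addressed store: one blob per unique content, plus a refcount."""
    def __init__(self):
        self.blobs = {}     # digest -> bytes (the actual disk usage)
        self.refcount = {}  # digest -> number of attachments referencing it

    def attach(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blobs:
            self.blobs[digest] = data          # stored once, however many refs
        self.refcount[digest] = self.refcount.get(digest, 0) + 1
        return digest

    def detach(self, digest: str) -> None:
        self.refcount[digest] -= 1
        if self.refcount[digest] == 0:         # archiver-style cleanup:
            del self.blobs[digest]             # remove only when unreferenced
            del self.refcount[digest]

store = DedupStore()
d1 = store.attach(b"2MB attachment contents")
d2 = store.attach(b"2MB attachment contents")  # same content, another request
print(len(store.blobs))    # 1 - only one copy on 'disk'
print(store.refcount[d1])  # 2 - two attachment entries reference it
```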
  10. Hi @Sahana. Shenoy, For all intents and purposes, you have configured the options in the AutoTask "copy" mechanism correctly. It is the Workflow spawned by the creation of the new Incident (sorry, I said SR above) which we suspect is setting the summary field to something based on captures - specifically because the summary appears to exist for a little while and then suddenly changes. If you post the Incident workflow being triggered, we/someone could have a look; alternatively, you could get in touch via the portal to use a little credit (please provide the link to this forum entry) for us to find the offending bit of workflow for you (or, indeed, if we don't find the culprit, be gobsmacked).
  11. Hi @Sahana. Shenoy, Could it be that the BP triggered on the creation of that SR (assuming that is set up) tries to populate the summary field with data from a (now non-existent) progressive capture?
  12. Hi @Dan Brown, Normally HTML code can be used to enhance the Teams message sent. That said, adding tags/mentions is a little more involved (see: https://techcommunity.microsoft.com/t5/teams-developer/can-we-create-tag-inside-team-using-graph-api/m-p/1200381 (*)), and we are currently NOT catering for this in our use of the API. (*) The payload sent to Teams wouldn't just be the (HTML) message, but also placeholders and definitions for each of the mentions.
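To illustrate why mentions need more than plain HTML: in Microsoft's Graph API, each `<at id="...">` placeholder in the message body must be matched by an entry in a separate mentions array. A rough sketch of such a payload as a Python dict, based on Microsoft's Graph documentation (the display name and GUID are placeholders, and this is not the payload Hornbill sends):

```python
import json

# Sketch of a Graph API chatMessage payload with one mention. The
# <at id="0"> tag in the HTML body is resolved via the "mentions" array.
message = {
    "body": {
        "contentType": "html",
        "content": 'Ticket updated, <at id="0">Dan Brown</at> please review.',
    },
    "mentions": [
        {
            "id": 0,
            "mentionText": "Dan Brown",
            "mentioned": {
                "user": {
                    "id": "00000000-0000-0000-0000-000000000000",  # placeholder GUID
                    "displayName": "Dan Brown",
                }
            },
        }
    ],
}

print(json.dumps(message)[:30])  # serialisable as-is for an HTTP POST body
```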
  13. Hi @Gareth Swallow, Yes, you can create your own derivative (Custom) report by adding the necessary filters:
  14. Hi @Prem Prakash gautam, One solution would be to have a local data store (an MS SQL/MySQL database) and use https://github.com/hornbill/goHornbillDataExport to seed that database (this feeds off the same reports that you have already created - i.e. the same 25k limit applies). The reporting can then be done from that local data store, and the data export tool can be run on a schedule to "top up" the store on a regular basis. Depending on what you want to report on, I would also urge you to think of your local data store as more than a single "requests" table. You can create more tables to hold different sets of data - e.g. some tables could hold aggregate data (such as the number of requests logged per month), so the reporting off those could be made simpler.
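As a sketch of the aggregate-table idea, here is the "requests logged per month" query against an in-memory SQLite database (table and column names are made up for the example; your local store would be the MS SQL/MySQL database seeded by the export tool):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requests (reference TEXT, date_logged TEXT)")
conn.executemany("INSERT INTO requests VALUES (?, ?)", [
    ("IN00001", "2023-01-05"),
    ("IN00002", "2023-01-17"),
    ("IN00003", "2023-02-02"),
])

# Aggregate: number of requests logged per month. Materialising results
# like these into their own table keeps the actual reporting queries simple.
rows = conn.execute(
    "SELECT substr(date_logged, 1, 7) AS month, COUNT(*) "
    "FROM requests GROUP BY month ORDER BY month"
).fetchall()

print(rows)  # [('2023-01', 2), ('2023-02', 1)]
```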
  15. Hi @will.good, CSV falls under Database in this instance; the connection goes via ODBC. As stated, native CSV support is on the to-do list, but not anytime in the near future.