Search the Community

Showing results for tags 'log'.

Found 8 results

  1. We are testing an implementation of the LogIncident and LogServiceRequest API calls. The API calls are successful and the results return the RequestId, summary and an empty warnings array. We are including the ServiceId and CatalogId in the requests, and these show correctly when we view the request in the live user application. However, a linked BPM process is not spawned when we make these requests, even though the documentation states it will be when the ServiceId is provided. We have also tried the LogRequestBPM node afterwards, but this returns 'defaultProcessNotSet'. We do not set a default BPM process at the service level, as we have the system set to enforce selection of the catalog item. I am wondering if the API documentation needs to be updated for the combination of ServiceId and CatalogId? We are not passing a bpmName, as we are expecting the ServiceId and CatalogId to be used to determine this, rather than having to hard-code it or make an earlier call to get the bpmName from the service details (a sketch of this call pattern appears after this list). Cheers Martyn
  2. As confirmed under the linked post below, the current API log request nodes only spawn a BPM if you specify the serviceId and have a default BPM set at the service level. Now that the service catalog and catalog items are available on all current request types, allowing the use of different BPM workflows, can the API call logic be enhanced to apply the same capability it offers for 'Service' level requests, so that the catalog-specific BPM workflow is spawned when both the serviceId and catalogId are passed as parameters? At the moment you have to make an additional request first to obtain the current BPM name from the catalog before you can call the log API endpoint, as you need to pass the BPM name, which is not efficient (a sketch of this two-step workaround also appears after this list). Similarly, you would not want to hard-code the BPM name in your application. Cheers Martyn
  3. Is there a way of tracking what happened to an email in the system? We have the email in our 'deleted' folder, and although we have an automatic rule for logging these emails, this one seems to have slipped through the net. We wondered if we could work out why.
  4. Could the SQL Importer be extended to give a reason/warning in the log when it skips or fails to process a contact record? This would make it easier to locate the updates that could not be applied; at the moment the only way to find them is to page through the log file and look for the single line. Also, when the tool errors, could the text be a normal colour rather than red on black, which is quite hard to read? I have tried changing the command prompt's default background colour, but to no avail, as the tool sets both the background and foreground colours for the error message output. Cheers Martyn
  5. Can you confirm which log file the Routing Rules are recorded into? We are in the process of setting up new rules, so it would be good to be able to see whether the rules are triggered or not. Cheers Martyn
  6. In the Admin tool under Monitor > Log Files, would it be possible for the filters selected at the top of the screen to be applied to the log download option, i.e. so it is possible to export just the security and critical entries? Also, would it be possible to have the option to expand the whole log file when filters are applied in the UI, rather than having to repeatedly click on Load More? Cheers Martyn
  7. Morning, I have a couple of questions regarding log files: What is the lifespan of a single log file? And where can I access a previous log file? When I use the log file viewer, it seems to search only within the current log file. Indeed, when I download the log file I can see on the first row a reference to where the previous log file is stored, but the web screen does not access it.
  8. Does the system record user logins to an audit table, or is there just the last authentication date stamp against the user's record? We implemented this in SupportWorks using VPME on the logon event to populate a table with a history of user logon date stamps, so we are looking at a way of achieving a similar outcome in the platform. Cheers Martyn
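
For item 1 above, here is a minimal sketch of the kind of call being described: logging an incident with a ServiceId and CatalogId but no bpmName. It assumes a JSON-over-HTTP invocation of a logIncident-style operation; the instance URL, endpoint path, auth header format and exact parameter casing are illustrative assumptions, not confirmed Hornbill documentation.

```python
from typing import Optional

import requests

# Placeholder instance endpoint and API key -- replace with your own values.
INSTANCE_URL = "https://api.hornbill.com/yourinstance/xmlmc"
API_KEY = "your-api-key"


def log_incident(summary: str, service_id, catalog_id,
                 bpm_name: Optional[str] = None) -> dict:
    """Sketch of a LogIncident-style call.

    As described in item 1, the call succeeds (a RequestId, summary and an
    empty warnings array come back), but no linked BPM is spawned unless a
    default BPM is set at the service level or a bpmName is passed explicitly.
    """
    params = {
        "summary": summary,
        "serviceId": service_id,
        "catalogId": catalog_id,
    }
    if bpm_name is not None:
        params["bpmName"] = bpm_name

    # Endpoint path and auth header format are assumptions for this sketch.
    resp = requests.post(
        f"{INSTANCE_URL}/apps/com.hornbill.servicemanager/Incidents/logIncident",
        headers={
            "Authorization": f"ESP-APIKEY {API_KEY}",
            "Accept": "application/json",
        },
        json={"params": params},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # expected to contain the RequestId and a warnings array
```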
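And a sketch of the two-step workaround described in item 2, building on the log_incident sketch above: resolve the BPM workflow name from the catalog item first, then pass it as bpmName when logging the request. The get_catalog_bpm_name helper is hypothetical (the exact lookup operation and field name are assumptions); the point is only to show the extra round trip that is needed while the API does not resolve the catalog-specific BPM itself.

```python
def get_catalog_bpm_name(catalog_id) -> str:
    """Hypothetical helper: look up the catalog item's BPM workflow name.

    In practice this would be another API or database query against the
    catalog item record; the operation and field names are assumptions.
    """
    raise NotImplementedError("resolve the catalog item's BPM workflow name here")


def log_incident_with_catalog_bpm(summary: str, service_id, catalog_id) -> dict:
    # Step 1: extra round trip to resolve the catalog-specific BPM name.
    bpm_name = get_catalog_bpm_name(catalog_id)
    # Step 2: log the request, passing bpmName explicitly so the BPM spawns.
    return log_incident(summary, service_id, catalog_id, bpm_name=bpm_name)
```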