Gerry
Root Admin · 2,437 posts · 172 days won
Everything posted by Gerry

  1. Following on from the above errors, by way of an explanation: over the last few days/weeks, Microsoft have been updating their servers, which changed the priority of TLS version negotiation as part of the key exchange process. Our server was not handling the case where the negotiation was declared as TLS 1.3 and that key exchange negotiation failed; there should have been a fall-back to TLS 1.2, but there was not. This was a problem in Hornbill's code, triggered by changes being made by Microsoft. Our initial thought was that this was a Microsoft issue, given that our other independent test points against systems like GMail, Postfix and others were all fine. A work-around was applied to our code and provided as a hot-fix; this hot-fix forced the negotiation down to TLS 1.2. We have since applied an update to one of the third-party libraries we use, and their feedback has confirmed the problem is now resolved without the forced downgrade to TLS 1.2; that will be in the next platform update. We apologise for any inconvenience caused, and we appreciate just how important Hornbill is to our customers' day-to-day operations. Unfortunately, when working with 3rd-party systems, changes that are not within our control do happen from time to time, and things can get broken. Hopefully we continue to demonstrate that we are able to respond to these kinds of events and resolve the issues in a timely manner. And for those that want some more technical details, this is the technical bit... ------- The issue related to the TLS 1.3 ClientHello with the "key_share" extension. The initial ClientHello message can include pre-generated key shares. While it is not computationally intensive to generate an x25519 key share, and virtually all servers support x25519, it is far more computationally expensive to generate a secp256r1 key share.
If our server generated it, then for applications that require high performance with many connections, the extra work of generating the secp256r1 key share would be significant. Therefore, Hornbill's SMTP implementation just pre-generates the x25519 key. If that key is not supported, the server will simply respond with a HelloRetryRequest, in which case Hornbill sends a new ClientHello with the secp256r1 key_share, or whatever other key shares are needed. Microsoft at some point seems to have stopped supporting x25519, or maybe changed something about the HelloRetryRequest handling, such that the HelloRetryRequest failed after a failed TLS 1.3 negotiation attempt; it's not clear exactly what has changed or why, as this is not documented. Hornbill's SMTP implementation now pre-generates the secp256r1 key share specifically for Microsoft servers. This is the best generic solution, because TLS connections will have better performance for the vast majority of SMTP servers, which currently support x25519. -------
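To illustrate the hot-fix approach described above (forcing the negotiation down to TLS 1.2), here is a minimal sketch using Python's standard `ssl` module. This is not Hornbill's actual code; it simply shows the general technique of capping the protocol version on a client context so that no TLS 1.3 ClientHello is ever sent:

```python
import ssl

# Build a client context that refuses to negotiate TLS 1.3, mirroring
# the hot-fix described above that forced negotiation down to TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # hard cap: no TLS 1.3 ClientHello

# A connection wrapped with this context can only ever negotiate TLS 1.2,
# side-stepping any TLS 1.3 key_share / HelloRetryRequest problems.
print(ctx.maximum_version == ssl.TLSVersion.TLSv1_2)
```

The permanent fix described above (pre-generating the right key share) lives at a lower level than this, but the version cap shows why the work-around was safe: TLS 1.2 uses a different key exchange flow entirely.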
  2. I have extended the function we are adding to include the ability for you to define the statuses that you would consider completed. When you use this suspend node, you will be able to specify one or more statuses that all linked requests should meet before the process resumes; this means you can use it for other types of status-related suspensions. Will post an example once it's ready for internal testing. Gerry
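The resume condition this suspend node evaluates can be sketched as below. This is purely illustrative (the function name and data shapes are hypothetical, not Hornbill's implementation): the process resumes only when every linked request's status is in the set the workflow author configured as "completed".

```python
# Hypothetical sketch of the suspend node's resume check: all linked
# requests must have a status in the configured "completed" set.
def can_resume(linked_request_statuses, completed_statuses):
    allowed = set(completed_statuses)
    return all(status in allowed for status in linked_request_statuses)

# Resumes: every linked request is in an allowed status.
print(can_resume(["Resolved", "Closed"], ["Resolved", "Closed", "Cancelled"]))
# Stays suspended: one linked request is still Open.
print(can_resume(["Resolved", "Open"], ["Resolved", "Closed"]))
```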
  3. @SJEaton Just to confirm the above, the dev team has said this will most likely be available in the next Service Manager update, so a small number of days from now, depending on what else is in the release pipeline. Gerry
  4. @SJEaton Yes, we have discussed internally and will see what we can get done; it is not a particularly difficult thing to implement, and we should have picked this up before letting you go down the custom field rabbit hole. We should have something in the coming days, I expect; we just need to get the work into the schedule. Gerry
  5. @SJEaton Hi, I had a quick read through this thread; there are very interesting and creative approaches here to make this work, but there are a couple of things that might help expand knowledge for anyone trying to achieve this sort of thing. The first thing to note is that the business process engine is highly asynchronous, inasmuch as it does lots of things in parallel, all at the same time, and there is no guarantee over which things get done in which order when running two or more processes side by side. This is also true when it comes to parallel processing: although each branch will process sequentially from the initial branch point, the order in which each branch path processes is not controllable. Once a processing branch hits a node that makes some other call, the BPM engine effectively suspends that node waiting for an event. For the most part, asynchronous processing is not intuitive; things do not happen in the order you might assume. The rather creative solution of using a custom field to "count" the number of child requests (basically incrementing a custom field numeric value each time a request is spawned, and decrementing once the child has completed, waiting for that in a parallel processing path) might seem logical, but it will almost certainly not work most of the time. The problem is that changing the value of a custom field involves a database read and write, and the reason this will not work reliably is that the operation is not atomic; everything is asynchronous, so from a timing perspective you will almost always see a miscount of some form. Trying to do this with a custom field is treating the BPM as a programming environment, and it is intentionally not designed as a graphical scripting/programming environment. So I would recommend you do not try the above approach, because it will, at the very best, be mostly unreliable.
The only (and correct) way this can be achieved will be for us to implement a dedicated operation called "Wait for Linked Requests to Complete"; this would present as an additional option in the below list of options, and would wait for linked child requests to complete, suspending the BPM process. I hope that explains why what you are trying to do will not work. Thanks Gerry
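The non-atomic read-modify-write problem described above can be shown deterministically. This sketch simulates two BPM branches that both read the counter before either writes back, which is exactly the interleaving an asynchronous engine can produce; one decrement is silently lost:

```python
# Deterministic illustration of the lost-update race described above.
# Two branches each do read -> compute -> write on a shared counter;
# because the sequence is not atomic, one update overwrites the other.
counter = 2  # two child requests still outstanding

# Both branches read the current value before either writes:
read_a = counter  # branch A sees 2
read_b = counter  # branch B also sees 2

counter = read_a - 1  # branch A writes back 1
counter = read_b - 1  # branch B also writes back 1: A's decrement is lost

print(counter)  # 1, not the 0 you would expect after two completions
```

With real database reads and writes the interleaving is non-deterministic, which is why the miscount appears "sometimes": it depends entirely on timing.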
  6. @Teresa Ward As @James Ainsworth mentioned above, there is unfortunately no search in this view. It's been on our radar a few times to address this, but it's never quite made it in priority above other stuff. Nothing immediate, but we have noted your request. Gerry
  7. @Teresa Ward Search the admin tool for Direct Outbound Email and find the message in there. Select the message and click on the little envelope icon next to the recipient, and that will give you a full delivery log. Gerry
  8. @Stefania Tarantino No change; it's currently implemented as intended. As a LiveChat user you have access to the embedded media and other content; only the transcript is transferred to the ticket, for the reasons explained above, and there are no plans to change this. Gerry
  9. As we have previously communicated pre-pandemic, the Service Portal (service.hornbill.com) has been replaced with the more functional and flexible Employee Portal. We were on track before the pandemic hit to EoL this portal, but because of the disruption placed on everyone by the lockdowns, we made a call to leave the portal in place so as not to load our customers with additional work. However, the time has now come to serve notice that we will be taking this portal out of action, at the latest, by the end of this year. Most Hornbill customers have already transitioned away from the Service Portal, but there is just a small handful of customers still using the portal today. For those remaining customers using the Service Portal, our customer success team will make contact and provide you with assistance to help you make the transition to the Employee Portal. There are improvements and enhancements we would very much like to make to the Employee Portal, but because both the Employee and Service Portals share substantially the same customisation data, and because of the Service Portal's browser compatibility issues, we are currently limited to maintaining the lowest common denominator. Once the Service Portal is taken out of service, we will be able to make more progressive changes.
  10. @Gareth Watkins Try making a minor modification to the workflow, like moving a node a little, and then save; you should then be able to activate it. If it works, I will explain why. Gerry
  11. @Berto2002 One other thing to consider: if you do need a large number of buttons, you can, for now, just use icons without the text, as you have already done in a couple of instances, and make sure you use the tooltip function. Appreciate that's a work-around; we will get the issue with the hidden reference number sorted out though. Gerry
  12. @Berto2002 I am sure we can improve the way this works. We did not really think people would want to use this custom button scheme in the way people are now using it; we thought "just the odd couple of buttons here and there", but things have moved on, especially since we introduced auto-tasks. Gerry
  13. A subscription for an application is counted as soon as any role from that application is granted to a user. In your case, with the user having the "Chat Session Agent" right granted, at the point that user logs in they will consume a chat agent licence. Once the subscription level for that application has been reached, further users with rights to that application will not be able to log in. So yes, the "Chat Session Agent" is a right that flags a user as a LiveChat user. Gerry
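The counting rule described above can be sketched as a simple check at login time. This is an illustrative model only (the function and parameter names are hypothetical, not Hornbill's API): holding any role from the application makes the user a licence consumer, and login is refused once the subscription level is reached.

```python
# Hypothetical model of the subscription rule described above.
def can_log_in(user_roles, app_roles, active_consumers, subscription_limit):
    # A user consumes a licence only if they hold at least one role
    # belonging to the application (e.g. "Chat Session Agent").
    consumes_licence = bool(set(user_roles) & set(app_roles))
    if not consumes_licence:
        return True  # no roles from this app, so no licence needed
    # Otherwise login is allowed only while the subscription has headroom.
    return active_consumers < subscription_limit

chat_roles = ["Chat Session Agent"]
print(can_log_in(["Chat Session Agent"], chat_roles, 9, 10))   # headroom left
print(can_log_in(["Chat Session Agent"], chat_roles, 10, 10))  # limit reached
print(can_log_in(["Basic User"], chat_roles, 10, 10))          # no chat role
```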
  14. @Damian Roberts This has been asked before, and actually that's a much bigger question/ask than you might imagine. Firstly, the same ICs (forms) are used in multiple places, by multiple actors (users, employees, customers, the mobile catalog and certain other apps). The first question is where these forms would get "saved to", and as soon as you ask that question, you then have to think about: once a form is saved, where do I get back to that form? How do I see the list of forms I have previously saved? How do I look at a form to see what I previously saved (because I forgot that I saved it)? What are the limits of how much I can save? And if, as a user, I save a form, and by the time I get back to that form the system admin has re-configured/changed the form definition, and/or the service so it references the wrong BPM, or the service request has changed its configuration so the content that the form was previously initialised with has changed... the list goes on and on. Nothing insurmountable of course, anything is technically possible, but there is a hell of a lot to consider for something that on the face of it sounds so simple. The issue is that IC is complex and offers a lot, but because of this, simple stuff like this, which seems basic, is actually quite hard to do in a way that would work well and be a net-add to the product. So I just want to set expectations here that this is not something we can just throw in; it is on our backlog already, but does not currently have priority. Thanks, Gerry
  15. @Damian Roberts We are currently in the final stages of development of a complete overhaul of the FAQ system; what you want to do can be achieved with these changes, which will be released in preview in the next 3-4 weeks, I anticipate. Gerry
  16. @Adrian Simpkins No, that does not look right; we will get that sorted ASAP. Gerry
  17. @Berto2002 It's quite hard to be very precise here because things are quite malleable right now. We have a clear understanding of what we would "like" to do, and if we could just make the changes, it would be done in a month or so. The problem is that this is a critical area of the product, and we have to make sure that we make the update/migration as painless or as automated as possible; that's difficult with 100's of different configurations out there, so ultimately we have to find a good compromise. So my answers here, given we are talking about futures, are subject to change! Nonetheless...
REPORTING: This is a bit different to what you are doing; this relates to the fact that each service can have a "configuration" for each request type, and likely most customers will have broadly the same settings, for incident for example, across all services. The report we are talking about is to facilitate a customer-by-customer understanding of this configuration, because it will be important to understand this when considering working with the revised portfolio scheme. What you are doing in relation to a service report will, I think, still stand, so I would encourage you to continue with that; it might need a tweak later on, but that's about it, I would think.
TEMPLATE: We will be taking a different approach here, which should remove the need for the template you talk of. In essence, the request types will be removed from the portfolio, and they will be configurable independently. You will be able to create as many variations of "incident" configurations as you need, for example; each can be given a name, and you will then be able to associate a request type to your service request(s) as required.
GLOBAL: The same applies as above. We would expect most customers to have a relatively small number of "request types" configured, which will be used by a much larger number of service request items within your portfolio; this will eliminate the need for common global settings.
IMHO the whole configuration of requests and of the service portfolio will be vastly improved and simplified when we have completed this work. The challenge, as I said, is how we facilitate the migration of existing configurations to this new configuration layout; we still have quite a lot of work to do in that regard. Hope that gives you some guidance. Gerry
  18. @Damien Lynn Unfortunately, the authorisation expiry scheme here does not consider any working time calendars when determining the expiry time. Only areas in the platform where you are explicitly able to set a working time calendar use working time calendars. Gerry
  19. @QEHNick To be honest with you, extending what @Victor has said, they are really used for two different purposes. Timesheets is really a general time/work management tool, whereas the "time spent" on request updates is ideal for recording billable/chargeable time, primarily because you can use the combination of the time spent field to record the time to bill and the diary update to record the justification for the charge. It may well be (as is often the case) that you spend quite a lot more time on something than the time you actually charge for: so time spent is the amount you want to charge the customer, and time recorded is the amount of time you actually spent on that work/activity. Hopefully that rambling makes some sense. Gerry
  20. @QEHNick Lol, great response. Yeah, the APIs are "documented", but I would not say it's the easiest thing to figure out. Basically, there will be an application-published API relating to bulletins, and you will use an API key generated against a specific user account to get the information. You will only see what that user has access to see, so the various rights will need to be set up accordingly. Once you have the data, you can then present it in whichever front end you would like. That's a lot easier said than done, but I am sure one of the support team can point you in the right direction. I don't have the specifics to hand, but here might be a good starting point. https://api.hornbill.com/apps/com.hornbill.servicemanager/ServiceBulletin?op=selfServiceGetServiceBulletin Gerry
  21. @QEHNick Unfortunately, the login page is not built with that in mind. The system is designed with quite a high degree of security applied to data, controlled primarily through the user's session; in essence, when there is no session, there is very limited access to data on the instance. That aside, the login page is generic and is shared between a number of the UIs, not all of which this suggestion would be relevant to; you can think of the login page as a component of the authentication service, not of the product itself. What you are asking for is akin to asking that the login page presented by ADFS should display a list of public Office 365 calendar entries before you log in: ADFS and the Office 365 calendar are in essence two completely different systems, and on Hornbill it's pretty much the same, so it's very unlikely we would make such a change. I hope this makes sense. I would re-interpret your request as "can we extend the login page to be a configurable intranet-style landing page where we can configure it to display some non-login-related public information", and would further extend this requirement to say "can we add a feature to Service Manager to allow bulletins to be marked as 'public' and accessible to anyone who does not have a login to the instance", and after that, "can we have a widget option on the login page that can display the publicly accessible bulletins from Service Manager". That's the breakdown of what you are asking for in terms of how the product is built today. We will make a note of the ask, but I cannot see that this is something many people have asked for (you might be the first one, actually).
As an alternative, if your environment would benefit from having access to these bulletins without being logged into the employee portal, what about sending your users to an intranet page first, where you can display any information you want, including information from the Hornbill instance with some basic integration? Then, if they need to log into Hornbill for further action, a link can take them to the login page. Gerry
  22. @RIchard Horton Yes that is correct, the 2FA is only applied to direct login, not SSO login. If you need 2FA on SSO login, we would assume your identity provider would provide that capability. Gerry
  23. @Giuseppe Iannacone As of now, XML is the supported payload format for the API as it stands today. JSON works (as you know), but it's not officially supported, because it's currently implemented as a transform to XML and is therefore subject to XML Schema validation. The problem is that while JSON is currently accepted as valid input, the order of the properties inside an object is important today. This is fine technically, but wrong when you consider JSON being generated from a JavaScript object using something like JSON.stringify(), where there are no guarantees about the order in which the properties inside an object will be serialised. It is for this reason that JSON is not officially supported for API request messages. For API responses, JSON is fully supported without any issues. The problem you encountered above is because we have recently changed the structure of the JSON request message to bring it in line with the JSON response structure; we made that decision because we assumed that no one was using JSON, as we do not document it or describe its use anywhere, and you will see, if you look in the browser network tab and inspect the API calls from our front end, that we always send API requests in XML and get JSON responses. Now, I have answered your question with a description because I am reluctant to just answer a plain "Yes" to your question, and here is why. You will see in the platform roadmap that we are working on a fairly big change around the APIs; the primary objective is to transition our API implementations to become JSON-first. That is, we will continue to support XML as we do today, but we will equally support JSON, and we will be transitioning all of our code to use JSON instead of XML over time. This work is in flight, and the above change was one of the changes resulting from this work. So with the above explanation, the answer I can give you is...
XML is the officially supported request payload because it's 100% compliant with the XML Schema specification, and it is currently our primary supported scheme. JSON is 100% functional and you can use it, but when using JSON you must be aware that the properties in the request object are required to be presented in a specific order, because we effectively validate that input with the XML Schema validator. Over the coming weeks, we will be transitioning our services over to a new scheme where JSON will be correctly supported with its own validation scheme, and so really, in a matter of a small number of months, I will be saying "use JSON"... So my advice at this time would be: if you are using JSON for integration, and it's working for you, continue to use JSON, but you will need to change the JSON request message payload to not include the methodCallResult top-level object, as described above. You can take this as an official statement that the JSON request format is not going to change again, so it will be very safe for you to continue to use JSON without having any future issues, but as of today the documentation does not reflect this. As I write this, I realise this is not ideal; we are in flight with a change, so we have to default our official position to what we know is not going to change, but in a few weeks' time the answer will be very different, and I want to make sure you take the best path. Hope this makes some sense. Gerry
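The property-ordering caveat described above can be demonstrated with Python's `json` module (a stand-in here for JSON.stringify; the field names are just examples). Python dicts preserve insertion order when serialised, so building the payload in the schema's declared order is one way to keep the serialised JSON in that order, which is what schema-backed validation currently requires:

```python
import json

# Two payloads with identical data but different property order.
schema_order = {"userId": "admin", "password": "cGFzc3dvcmQ="}
wrong_order = {"password": "cGFzc3dvcmQ=", "userId": "admin"}

# json.dumps preserves dict insertion order, so serialisation order
# is under your control; an order-sensitive validator would accept
# the first payload and reject the second, despite equal content.
print(json.dumps(schema_order))
print(list(schema_order) == list(wrong_order))  # False: same data, different order
```

This is exactly why generic serialisers are risky here today: they make no ordering promise, so the safe options are XML, or JSON built deliberately in schema order.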
  24. @Giuseppe Iannacone This error message comes about because, when the API is being invoked, you are specifying a different service endpoint in the POST URL than you have specified in the XMLMC payload. The following example would throw this exact message, because we are posting to the "automation" service (last part of the URL) but specifying the "session" service in the <methodCall service="session"> attribute. Gerry

POST /<your instance>/xmlmc/automation/

<methodCall service="session" method="userLogon">
  <params>
    <userId>admin</userId>
    <password>cGFzc3dvcmQ=</password>
  </params>
</methodCall>
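A pre-flight check for this mismatch can be sketched as below. This is an illustrative helper, not a Hornbill tool: it compares the service segment at the end of the POST path against the service attribute of the XMLMC payload, using the userLogon example from the post above.

```python
import xml.etree.ElementTree as ET

# Illustrative check: the service segment of the POST URL must match
# the service attribute on the <methodCall> element, per the post above.
def services_match(post_path, xmlmc_payload):
    url_service = post_path.rstrip("/").split("/")[-1]
    payload_service = ET.fromstring(xmlmc_payload).get("service")
    return url_service == payload_service

payload = (
    '<methodCall service="session" method="userLogon">'
    "<params><userId>admin</userId><password>cGFzc3dvcmQ=</password></params>"
    "</methodCall>"
)

print(services_match("/myinstance/xmlmc/automation/", payload))  # False: mismatch
print(services_match("/myinstance/xmlmc/session/", payload))     # True: consistent
```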
  25. @samwoo Even with the caching issue aside, that's another problem with updating an image: if you don't know where you have used the images, you may change things you do not expect! A spreadsheet does seem archaic though. Hornbill could probably benefit a lot from having a full-blown Digital Asset Management (DAM) app; these things are quite common in marketing environments. Tools like these:
https://www.canto.com/
https://www.bynder.com/en/
https://www.mediavalet.com/
and many others exist. These are the tools that allow you to organise large amounts of media, share them, create versions, and centrally update assets and so on. Possibly overkill when you just want to throw a few logos into an email or portal, which is why we implemented the image library in the first place. Gerry