Everything posted by Gerry

  1. @samwoo When I first read this I thought, that makes sense, it could not be that hard. Then I thought about it a little more and I am not sure we could do this, at least not in a way that would allow WTCs to work for their intended purpose. The primary goal of a WTC is to allow the measurement of SLAs over working time (as opposed to elapsed time). When doing such calculations we use WTCs, for example, to take a point in time and determine a future date/time based on the number of working seconds from that starting point. Our use of WTCs relies on these predictions being correct. So for example, if an SLA is working towards a time where it has calculated an escalation, taking into account (correctly) a bank holiday on Monday next week, and you then suddenly add that day back in as an inclusion, then even though the escalation has already happened, according to the referenced WTC that escalation point has suddenly no longer been reached. Now of course, we have used WTCs for other things, like calendars and appointments, which I think is what you are referring to, so probably this is something that would need to be accommodated outside of the WTC. Or we would have to have different types of WTCs, some for SLA-related time calculations and some for managing available working hours in calendars. Not sure really, I was just responding with an initial brain dump. I am not completely familiar with all the places WTCs are employed, so we would need to give this requirement some consideration so we do not inadvertently cause other unexpected problems. Gerry
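The working-time arithmetic described above can be sketched roughly like this. This is a simplified illustration only, not Hornbill's actual WTC implementation: it models a single daily working window plus a set of excluded dates, and ignores weekends and per-day variations.

```python
from datetime import datetime, date, time, timedelta

def add_working_seconds(start, seconds, day_start, day_end, holidays):
    """Advance from `start` by `seconds` of *working* time, where each
    working day runs day_start..day_end and dates in `holidays` are
    skipped entirely. Simplified sketch; no weekend handling."""
    current = start
    remaining = seconds
    while True:
        d = current.date()
        open_t = datetime.combine(d, day_start)
        close_t = datetime.combine(d, day_end)
        if d in holidays or current >= close_t:
            # Non-working day, or past closing time: jump to next day's open.
            current = datetime.combine(d + timedelta(days=1), day_start)
            continue
        if current < open_t:
            current = open_t
        available = (close_t - current).total_seconds()
        if remaining <= available:
            return current + timedelta(seconds=remaining)
        remaining -= available
        current = datetime.combine(d + timedelta(days=1), day_start)

# Two working hours from Monday 16:00 (09:00-17:00 day), with Tuesday
# marked as a bank holiday, lands at 10:00 on Wednesday.
result = add_working_seconds(
    datetime(2024, 1, 8, 16, 0), 7200, time(9), time(17), {date(2024, 1, 9)}
)
```

This also illustrates the fragility Gerry describes: if the Tuesday holiday were later removed from the set, the same call would return a different (earlier) target time, retroactively moving an escalation point that had already been acted on.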
  2. @AlexOnTheHill Changing the team structure is easy, but the hard part is all the other parts of the system and applications that reference those teams. For example (and this is just one of many), when you set up a workflow assignment node to assign to a team, if you delete that team your workflow will still reference it, and in that case things will break. Now scale that up to the number of workflows and historic data records and you will start to get a sense of the scale of the problem. The general advice would be: don't delete teams... unless you really understand the consequences. Some useful documents that may help: https://docs.hornbill.com/esp-fundamentals/core-capabilities/organization-and-teams https://docs.hornbill.com/esp-fundamentals/best-practice/org-structure Gerry
  3. @will.good To expand on Victor's response above: it is certainly possible to change the h_user ID value for one or more users, BUT it is very complicated to do and could break many things. The h_user_id field, for legacy (and therefore backwards compatibility) reasons, is the primary unique identifier for a user account. This means there are a large number of other areas that will be referencing that ID, so if you change it you will break those referential links; these could be anything from simple FK references in other tables to references to the user(s) from within workflows and ICs, and many other places that reference the primary identifier. So while it is technically possible to do, you will likely break lots of things doing it, and we generally recommend that you don't make such a change. Many customers use opaque values for this field, often things like the user's AD SID or similar, for this exact reason. Hope that makes sense. Gerry
  4. Hi Martyn, Thanks for the reminder. I had to go back and look before responding. So this is the position so far. We did, a very long time ago, make it possible for each application to "extend" the UI in the Auto Responder rule properties to add additional information. You will see this in action when, for example, you select an action that requires you to select a specific template; in that case the generic Reference field is refactored to become a drop-down list of templates for you to select from, and that drop-down is the application-operation-specific custom property. In practice, it is possible for each application team to show whatever additional information they like here, which can include a theoretically endless number of fields for additional information that is passed to the underlying operation. The one limitation is that the Reference field stored in the database only has a capacity of 64 characters.
So based on my last comment above about there being two sets of things that need to happen, the first thing needed has already been done. This means that I need to pass over to the Service Manager team to do the second. I expect this might already be on the backlog; I will need to check, and I will ask someone from the SM dev team to follow up here.
All that being said, having had a quick review of the function as it is today with a developer, I do want to make a couple of different improvements, which are simple and now in the works:
* I am going to expand the capacity of the field that holds the additional custom information from 64 to 2048 characters. That should be enough to allow for simple JSON structures, CSVs or any other text-based multi-value property construct, giving the application team more flexibility as to what they can bring through into settings.
* By default, if an application auto responder operation does not provide custom properties, we show the Reference field. In most cases there is no use for this, so we are going to hide it by default so it is less confusing; each Auto Responder operation that needs the reference field can resurface it, which will make things easier for anyone configuring these.
* I am going to arrange for the UI of the Email Routing Rule properties to be slightly re-organised such that the Reference field is removed (as above) and, when there are custom config properties, these will be shown in a separate form section. I will also encourage the app team(s) to include in their custom properties area an in-app help popup, like we have already provided for the expression.
These changes are trivial, so we will get them done over the next couple of weeks, and I will ask the SM team to look at this requirement with a view to making use of the above. Hope that's helpful Gerry
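As an illustration of the kind of multi-value construct the expanded 2048-character field could carry, here is a small sketch. The property names are invented for this example; they are not a real Hornbill schema.

```python
import json

# Hypothetical multi-value custom property set for an Auto Responder
# operation, packed into the single expanded field as compact JSON.
custom_props = {
    "template": "new-starter",
    "team": "service-desk",
    "priority": "P3",
}

# Compact separators keep the serialized form small.
encoded = json.dumps(custom_props, separators=(",", ":"))
assert len(encoded) <= 2048  # must fit the expanded column capacity

# The application can unpack the structured values on the other side.
decoded = json.loads(encoded)
```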
  5. @samwoo I appreciate that can be frustrating. I would strongly recommend that if you are doing long Workflow editing sessions, you enable this option... Gerry
  6. @Martyn Houghton Thanks for the clarification. So by the sounds of it a Requests::lookupRequestIdByExternalId() API would get you what you need? It would look for the request where "h_external_ref_number" matches the external reference, and would return the Request ID? Gerry
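If such an API were added, the client side of the lookup might be shaped something like this sketch. To be clear, this API does not exist yet, and the envelope keys and parameter name here are purely hypothetical, not Hornbill's actual wire format.

```python
import json

def build_lookup_request(external_ref):
    """Sketch of a request body for the *proposed*
    Requests::lookupRequestIdByExternalId API, which would match
    h_external_ref_number and return the Request ID.
    Envelope shape is hypothetical."""
    return json.dumps({
        "service": "Requests",
        "method": "lookupRequestIdByExternalId",
        "params": {"externalRefNumber": external_ref},
    })

payload = build_lookup_request("EXT-000123")
```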
  7. @Jim @yelyah.nodrog I just wanted to make a comment here. Any database table with the h_sys_* prefix is subject to change at any time. That means you cannot rely on any report you run against those tables to keep working moving forwards. I appreciate that, as of today, the Hornbill platform's system reporting capabilities allow you to peer into pretty much any part of the database, so you are of course free to do this; it is your instance and your data. However, should these tables change as things evolve, and should the platform change in the future to better secure the system for multi-tenant use, you may well find this sort of report no longer works. With that in mind, the better question for us to ask is: what are you trying to get/filter by? Perhaps if we understand the requirement, we would stand a chance of providing you that facility instead, which would be more convenient for you and far more supportable by us. Gerry
  8. @Martyn Houghton Can I ask, what APIs are you currently using to achieve that? The requirement is perfectly reasonable; however, for us to implement it in a way that is supportable would require some specific behaviour. For example, many of our customers now have millions of request records, so if your external reference number was just stored in a custom field, where there is no index, it would be a very poorly performing API call, as it would have to do a full table scan to find a record. So if this is a requirement, we should provide you with the ability to do it via an API, but in that case we would probably want to add a dedicated field (if one does not already exist) and ensure that it is correctly indexed in order to support such an API. This is really the point: the general query APIs will let you do pretty much anything, but given that our database schema and general query APIs (like the search API) will and need to change as we improve things, we cannot be continuously locked into the APIs of the day, as we have been when exposing these. I would be interested to know which APIs you are currently using to achieve this. It certainly sounds like something we should be able to correctly accommodate. Gerry
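The performance point about indexing can be shown with a toy in-memory model: a full "table scan" is O(n) per lookup, while a dedicated index makes each lookup effectively O(1), which is why a dedicated, indexed column matters at millions of rows. The table and field names below are illustrative only.

```python
# Toy model: a "table" of request records with an external reference.
requests_table = [
    {"id": f"IN{i:08d}", "external_ref": f"EXT-{i}"}
    for i in range(100_000)
]

def scan_lookup(ref):
    """Full table scan - O(n), like querying an unindexed custom field."""
    for row in requests_table:
        if row["external_ref"] == ref:
            return row["id"]
    return None

# Dedicated index - O(1) average per lookup, like a DB index on the column.
index = {row["external_ref"]: row["id"] for row in requests_table}

assert scan_lookup("EXT-99999") == index["EXT-99999"]
```

Both return the same answer; only the cost per call differs, and that difference is what makes an unindexed lookup API unsupportable at scale.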
  9. @Adrian Simpkins Can I clarify: when you say connection node, do you mean the Service Manager action to add a connection to a request? Gerry
  10. @Berto2002 We generally do not stop for Christmas in terms of ongoing software development/changes/pushes. In practice things do get naturally quieter, but we have no official freeze window. Gerry
  11. Just on the point @Steve Giller mentioned, PHP has not been used in our code base for more than 8 years. Our APIs are pretty generic though, so they are certainly usable from PHP if required. Because our API stack is focused on JSON now, especially for customer-facing APIs, there is no longer any need for us to provide platform-specific API libraries; just use the standard RPC/REST-style APIs, which are simple and easy to use from pretty much every language that works with modern web technology. Our docs contain examples in numerous languages. Gerry
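A generic JSON-over-HTTP call of the kind described needs nothing beyond a language's standard library. This Python sketch builds such a request; the URL, method name, auth header and payload shape are placeholders for illustration, not Hornbill's actual wire format.

```python
import json
import urllib.request

def build_json_api_request(url, method, params, api_key):
    """Construct a generic JSON-over-HTTP API request.
    All names/shapes here are placeholder examples."""
    body = json.dumps({"method": method, "params": params}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # placeholder auth scheme
        },
    )

# The caller would execute it with urllib.request.urlopen(req).
req = build_json_api_request(
    "https://api.example.com/v1", "session::getInfo", {}, "MY-KEY"
)
```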
  12. @samwoo Good question Sam. The basic answer is no, one does not automatically get you the other. But if the function is present and working as required - for example, let's say there is an automation presented in the BPM for performing an action, and there is a reason why it would be a good idea to also have that as an API - then that may make it easier to implement, especially if the logic behind the function is complex. Generally speaking though, the request for an additional function in the BPM and an additional API are treated as two separate requirements; in both cases we are asking why it is needed, how relevant it is in that domain (BPM, API etc.) and all that stuff.
Historically we have been very bad at just adding stuff, even if it only aids one customer, and in many cases now, as the system's scale and size has grown, this approach has come back to bite us: we needed to change things that were incompatible, or we had forgotten about that variation, or we received many complaints about how complicated the system is - too many APIs, too many options, not enough documentation, examples, guidance etc. All fair criticisms and something we are very keen to do something about. The changes to the APIs, the documentation, getting the Academy up and running and many other initiatives are all components of allowing our platform to continue to grow, while at the same time making things simpler, more consumable, better documented etc.
So when we look at requirements from customers, from the sales field and in the competitive landscape, we apply a much more inquisitive mind to those requests. Just knowing what someone would like is no longer enough to get stuff added to the product; we need to understand not only what, but why it is needed, how it fits into the bigger picture, and we need to consider whether there are better/alternative ways to achieve the same thing. Hopefully that sheds a little light and answers your question. Gerry
  13. @samwoo It would certainly be helpful for us to understand generally what customers are using the APIs for, so yes, please ask. We are keen to understand not only which APIs you use but what you use them for; that might give us better insights into how we might produce more useful customer-facing APIs. Gerry
  14. @samwoo You can just use these forums. Just post in the right forum so the right dev team gets it. For the most part you are either going to be requesting a platform API or a Service Manager API. For the shared user example above, that would be a Service Manager feature request. You can generally determine that based on which doc you would expect the API to be documented in. For now, all old APIs will continue to work; the caveat is that at some point they will stop working because of the API infrastructure changes we are making. At some point in the future the API endpoints will also be changing - we will communicate this early with plenty of notice, so no need to worry about it now. In essence, the current endpoints will be used exclusively by our front-end applications. There will be a newly created API endpoint dedicated to customer consumption; when this happens, these major changes will kick in:
* Only the APIs documented on docs.hornbill.com will be accessible on that new endpoint.
* The existing endpoint will no longer work with API keys; only interactive sessions will work, and the only way to create an interactive session will be from our apps, not available to API users.
* The new API endpoint will only work with API keys.
* The old API documentation is going to be taken out of service/will no longer be available (that's happening quite soon).
There are still some months to go here, so if you are using some of the old APIs they will continue to operate, but they may change or disappear without any prior notice as this development effort continues. Gerry
  15. @Osman I get what you are asking for. The problem is, we are having to cater for a large number of scenarios; there are various ways of logging into the system: SSO is one, SSO with more than one IdP is another, and there are passport, direct login, support passcode login etc. We do not detect whether there is already a login; that is not possible, because the cookie(s) that exist are scoped to a different domain. In order to just pass through, what we have to do is initiate the SSO cycle, which involves redirecting to the IdP, and the IdP then redirects the browser back to our server. Catching all the variations of things that can go wrong here is complicated. When implementing these sorts of things we have to do things that work for everyone, and so the design decision was taken to first have a landing page, so we have a way of presenting the choices needed. I acknowledge it is technically possible to just do SSO directly, but because our login/landing page for logging into Hornbill has to cater for both users of the system and basic users (your end users), what we have now works for all cases. The special URL suggestion "may" be possible, but then we would place certain handcuffs on ourselves for features we may want to add in the future, so at this time this is not something we will be adding to the product. Gerry
  16. No problem all, our apologies for that. I am not exactly sure what caused the problem; I expect we missed it in testing. Please do keep in mind though that this API is not currently flagged as a customer-facing API, and from the looks of the API signature it does not meet our own guidelines for being one, because there are undocumented JSON strings/structures emitted from it. These JSON structures are generally not rigid and are subject to change at any time, which means this API can also change at any time. Gerry
  17. @ChrisDee I believe this should already be fixed; I am sure I saw a hotfix go out for this. Is it still not working for you? Gerry
  18. @Osman Ahh sorry, I had misread your original post. You are asking if you can simply bypass the login page altogether. This was discussed extensively on the forums when this was first implemented; the short answer is no, there is no way to bypass that login page. This is precisely because there is more than one option and, should you need to log in with a different method, you need a way of getting to those. What we have implemented now is the best compromise to meet all of the various login/authentication requirements. Gerry
  19. @Osman You can disable this option in your CSS profile, which will then not ask you to authenticate if you are already authenticated on your IDP Gerry
  20. @Kelvin Ahh, if you are trying to do this in the Workflow, that's a different thing; I thought you were talking about the email routing rules. In this case you only have Regex Substring as an option, and I am not familiar enough with regular expressions to offer guidance on what expression might be used to get what you want. You may want to raise an enhancement request to have the SM team provide something equivalent to the TOKEN() function I mentioned above in this String Utilities list. Gerry
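For what it's worth, the general shape of a regex substring extraction looks like this. This is illustrative only; the exact expression syntax supported by the workflow's Regex Substring utility may differ, and the sample subject line is invented.

```python
import re

# Pull the second whitespace-separated token out of an email subject:
# skip the first token (^\S+ plus the following spaces), then capture
# the next run of non-whitespace characters.
subject = "REF-10042 PasswordReset requested by jane.doe"

match = re.search(r"^\S+\s+(\S+)", subject)
token = match.group(1) if match else None
```

The capture group `(\S+)` is what a substring utility would return, which is essentially what a TOKEN()-style function does with a delimiter and an index.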
  21. @Kelvin You can also use TOKEN(); please see the following documents for full details of what's possible. Email Routing Rules: https://docs.hornbill.com/esp-config/email/routing-rules The Expression Engine: https://docs.hornbill.com/esp-fundamentals/reference-guides/express-logic Gerry
  22. @Adam Toms We are definitely looking to build out a certification program; that's in the long-term roadmap. We would like to see our customers gain a higher level of proficiency and experience with our product, which continues to evolve with ever more comprehensive capabilities. We recognise that customers are often not even aware of the scope of our product's capabilities, something we believe the academy, the new documentation and the other customer-enablement initiatives we are starting to build out will help address. We are currently focused on content, and we have quite a way to go with that, but we are investing in this initiative and have a dedicated team of people behind the academy focused on building it out. We appreciate the feedback; please make use of the academy, and all feedback is welcome. There will be a certification program in the future, but that is very unlikely to appear in 2024; we need a lot more content development first, which is where our current focus is. Gerry
  23. @GJ06 No, unfortunately not; that is one of the problems we are trying to solve. In all honesty we have no idea which customers are using which APIs. We have done our best to determine this from the logs and other sources of information we have, but it is not so simple, because the API catalog customers have been using is the same API catalog we use internally. This change is about making those two separate things. Technically the APIs, although no longer documented, will still work today unimpeded, but the basic aim is: if an API is not earmarked for customer use, it does not appear in the new documentation, and the legacy dev documentation that was previously published to api.hornbill.com has been removed. We are not proactively seeking to break things but, as I said, we have no absolute way of knowing exactly which APIs customers are using; we ourselves are trying to make that determination. It is critical we make this change if we are to provide a stable set of APIs for customers that are of high quality, well documented, and properly supported from an availability point of view.
The above problem reported by both @ChrisDee and @CraigP is a perfect example of this. The API (getUserProfileAssets) appears to be broken, almost certainly because something changed and this API did not have test/validation coverage in our test automation. In our current approach all APIs are equal; what we are putting in place is that the specific APIs that are customer facing will be treated with a much higher priority, will have more testing applied, and will be objectively more stable as a result.
@ChrisDee @CraigP In relation to the getUserProfileAssets API, this seems from the error message to be a defect that we have inadvertently introduced. This specific problem is not a result of the API architecture change; there seems to be a simple software defect. I will ensure a defect is raised against this and it gets fixed ASAP; this ought to be something we can simply hotfix, in relatively short order I would expect. The question as to whether this API should be promoted to a customer-facing API is still open; for now it will continue to work. On the face of it, looking at the API spec, it does not pass our quality requirements in terms of being properly documented, because we are exposing an undocumented JSON structure via an xs:string property type. As I mentioned above, it is not our goal to mess things up for customers, so we need to provide a more definitive answer on this for you. In the meantime, we will fix the API so it works as it did previously, and I will raise this API with the dev team, who can make a further determination as to its status.
@ChrisDee Thanks for providing the explanation around the Device Audit facility. I am not sure I understand what it is in detail, but I will pass it on to the Service Manager product team to look into, to see if this is something we should be including in Self-Service rather than you having to roll your own, or, if not, whether we should be providing you with the requisite APIs you need to DIY this as you have done. Thanks Gerry
  24. @RichardD The official position is: if the API is not published on https://docs.hornbill.com/ then it is not available for use and, more importantly, is not stable, meaning it is subject to change at any time, without prior notice, and without any steps to take care of compatibility issues with existing use. Right, that all sounds very harsh... but the truth is, we are in transition when it comes to APIs and Hornbill. Our original strategy was to document and publish *ALL* APIs, and over the last 10 years that list has grown to thousands of them. Not only do we create (reasonably well documented) APIs at the platform level, we also create many APIs at the application level, and over the last 10 years that has led to a number of problems:
* Many APIs that are published are not generally useful to customers, creating a "cannot see the wood for the trees" problem.
* The focus of our API creation has been to support our own development efforts around front ends, administration and integrations; customer access has typically made use of a very small subset of APIs.
* Documentation ranges from acceptable to very poor. Many APIs used for supporting front-end UI functionality have ended up using strings to store/load complex JSON structures that are basically not documentable.
The scale of our APIs, along with a need for *much much better* customer-facing documentation, has led us to a point where we need to create a different set of APIs for customers: a smaller, much more well defined, stable and, importantly, well documented, high-quality set, and that has been part of what we have been working towards. To that end, what is now documented on docs.hornbill.com is our current take on which APIs we feel need to be customer facing. All other APIs that were there before are *currently* still available to be called; they are just not documented.
However, it is important to note that, ultimately, if an API is not documented on https://docs.hornbill.com/ then it is probably a good idea to assume you are using an unsupported API, which is very likely to change, break or even disappear completely from the system, without prior notice from us, and without any contingency for backwards compatibility or alternatives. That will mean it simply stops working, and if that happens, our support team will look at the documentation, and if it is not there, there will be no immediate fix. That is not going to happen overnight, but there is a strong possibility it will happen in the future. We are in the somewhat bizarre position today that if we need to change or deprecate some APIs in support of work on our own product, we cannot, because we have no idea who is using them. This is crazy given that the vast majority of APIs are only ever used by us, so we should be free to change them in order to improve the product. So there is a very strong need for us to be able both to commit to a stable set of APIs for our customers to use, and to have APIs that we can use/change/evolve as we see fit, knowing we will not be impacting our customers' use of APIs.
If the specific API you are talking about (getUserProfileAssets) is something you feel should be customer facing, then there is no problem proposing that; we would need to run it through our internal team/process to make sure there are no other reasons why it should not be added as a customer API, and on the back of that, I will raise the question internally for you. Can you tell me what you use this particular API for?
As part of our overall documentation effort at Hornbill, the aim is to drive up the quality of our documentation considerably, and the API documentation is a big part of that effort. Internally we will be publishing a "Customer Facing API Quality Standard" which will mandate that our developers meet a certain standard before making APIs generally available to customers. By standard I mean using correct data types, documenting each input/output parameter properly, setting appropriate security controls, providing examples for use of the API etc., and of course, for the APIs that are already documented/published, we have quite a lot of retrospective work to do in this regard. The end game is that our customer-accessible APIs will be completely independent of our internal APIs; we are working on significant changes and architectural improvements to our API infrastructure that will help facilitate this transition. Hope that helps Gerry
  25. @Alisha There are no limits; you can have as many routing rules as you want. The processing uses in-memory evaluation of expressions via our ExpressLogic engine, and it is very, very fast, so unless you start adding hundreds of thousands of rules you are good to go. Please keep in mind though that the more of these you add, the more of a headache it will be to troubleshoot, although we have recently added something, to be released very soon, that will help a lot with that. See the attached: each email message delivered into your Hornbill instance will now include a log detailing the email delivery/import process. Of particular interest will be the part of the log shown in the green box, which shows you the inbound email routing rule processing, and whether any of the rules matched. This new log will be out in the next week or two, so please watch this space. Gerry