Martyn Houghton Posted September 19, 2017

At the moment you can control the time of day that platform updates are applied to your Hornbill instance via the autoUpdate.maintenanceWindow parameter in settings. However, this only controls the time of day and gives no option to determine the day(s) of the week. Though most platform updates take very little time to apply, e.g. around 9 seconds, some of the larger and more complex ones have taken a considerable amount of time depending on data volume, e.g. over an hour. As we move towards 24/7 operation we would also want to control the day(s) of the week on which the maintenance window is available, so that we can schedule the suspension of the system to have the smallest possible impact.

Cheers

Martyn
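For illustration, here is a minimal sketch of the kind of day-aware gating being requested. It is purely hypothetical: the thread does not show the value format of autoUpdate.maintenanceWindow, and the day-of-week gate does not exist in the product; the function below only demonstrates the idea.

```python
# Hypothetical sketch of a day-aware maintenance window check.
# The HH:MM window and the "days" gate are illustrative assumptions,
# not the real autoUpdate.maintenanceWindow format.
from datetime import datetime, time

def in_maintenance_window(now, start=time(2, 0), end=time(4, 0),
                          days=frozenset({"Sat", "Sun"})):
    """Return True if `now` falls inside the window on an allowed day."""
    return now.strftime("%a") in days and start <= now.time() < end

print(in_maintenance_window(datetime(2017, 9, 19, 2, 30)))  # False: a Tuesday
print(in_maintenance_window(datetime(2017, 9, 23, 2, 30)))  # True: a Saturday
```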
Gerry Posted September 19, 2017

Hi @Martyn Houghton

Thanks for the question. Firstly, I will say that it's very unusual for an update to take anything like the amount of time reported above; it's more typically 5-10 seconds, sessions are persistent, and retry logic ensures users do not experience unavailability, assuming the update time is below 15 seconds. I will need to ask our team to investigate why this happened in this instance. If we make database schema changes that cause an index rebuild, and you have very large data sets, then it is possible for an update to take some time, but we generally only get concerned with things that take more than a minute or two, which is why I am surprised to see this. I expect something went wrong in this instance; we will need to investigate and get back to you.

In terms of the question relating to the maintenance window: the problem with providing a day mask for this is that we could end up causing problems. Updates are applied incrementally and forwards only, and our publishing process handles this. If we provided this capability and you only allowed updates on Sunday at 2AM, and we had a number of sequential updates queued, an incremental dependency might get skipped, causing an update failure. The system and the processes that handle and schedule our updates are tuned, configured and organised around the premise that instances can be updated once a day; allowing customers to gate updates over days would stall our ability to push updates at the rate we currently do, so I would not be keen to change that.

Now, we have in the works some fairly substantial under-the-hood changes that will alter this behaviour: specifically, for updates that do not include database schema changes there will be zero service unavailability, eliminating those 5-10 second windows you currently have. This is still work in progress, so watch this space.

Hopefully that makes sense.

Gerry
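A toy sketch of the forward-only constraint Gerry describes (this illustrates the general principle only, not Hornbill's actual update mechanism; the build numbers and UpdateError type are invented for the example):

```python
# Toy model of forward-only, incremental updates: each build can only be
# applied on top of the build immediately before it, so none may be skipped.
class UpdateError(Exception):
    pass

def apply_updates(current_build, available):
    """Apply pending builds strictly in ascending sequence."""
    for build in sorted(available):
        if build <= current_build:
            continue  # already applied
        if build != current_build + 1:
            # A gap like this is what a day-gated window risks creating.
            raise UpdateError(f"build {build} requires {current_build + 1} first")
        print(f"applying build {build}")
        current_build = build
    return current_build

apply_updates(100, [101, 102, 103])  # a full chain applies cleanly
try:
    apply_updates(100, [101, 103])   # build 102 is missing from the chain
except UpdateError as e:
    print("update failed:", e)
```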
Martyn Houghton Posted September 19, 2017

@Gerry

I have sent you details by email of the build updates in question and their durations on our instance, so hopefully that will help with the under-the-hood work you are doing now. If the build updates can be contained to the short periods you are aiming for, then we can cope from a business perspective; the possibility of updates applying during a quiet period of 24/7 operation is something we can live with.

Cheers

Martyn
Gerry Posted September 19, 2017

Thanks @Martyn Houghton,

Of those you sent, four took longer than 9 seconds; we will investigate. We are fairly sure that one of the updates relates to a schema change on a table that holds business process state, and on your instance this table is around 10GB in size! The majority of the data in this table is static, so I expect we will have to implement some form of object archiving to reduce the size of this table in order to prevent this type of update delay. As for the other three, I am waiting to hear what we find in the logs; once I know, I will let you know.

Gerry
Gerry Posted September 20, 2017

@Martyn Houghton These were all related to schema changes on the table that holds BPM state (on your instance that table is around 10GB in size), and schema changes take time on large tables. The good news is we don't often change the schema of this table, so I would expect you will not see this very frequently at all. Nonetheless, this is not efficient, so we are looking into ways of changing how we hold BPM state to overcome this problem. It will take some planning, so it won't be an immediate change, but I will keep you informed as we make progress.

Gerry
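For context, a minimal sketch of the object-archiving idea mentioned above: move rows for finished process instances out of the hot table so that schema changes and index rebuilds only touch the small active set. Everything here is assumed for illustration (SQLite, and the bpm_state table and column names, are not Hornbill's real storage or schema):

```python
# Illustrative only: archive static rows out of a hot "bpm_state" table
# so that future ALTER TABLE / index rebuilds scan far less data.
# Table and column names are hypothetical; SQLite keeps the sketch runnable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bpm_state (id INTEGER PRIMARY KEY, status TEXT, payload TEXT);
    CREATE TABLE bpm_state_archive (id INTEGER PRIMARY KEY, status TEXT, payload TEXT);
    INSERT INTO bpm_state VALUES
        (1, 'completed', '...'), (2, 'active', '...'), (3, 'completed', '...');
""")

def archive_completed(conn, batch_size=1000):
    """Move completed (static) rows to the archive table in small batches."""
    while True:
        rows = conn.execute(
            "SELECT id, status, payload FROM bpm_state "
            "WHERE status = 'completed' LIMIT ?", (batch_size,)).fetchall()
        if not rows:
            break
        conn.executemany("INSERT INTO bpm_state_archive VALUES (?, ?, ?)", rows)
        conn.executemany("DELETE FROM bpm_state WHERE id = ?",
                         [(row[0],) for row in rows])
        conn.commit()

archive_completed(conn)
print(conn.execute("SELECT COUNT(*) FROM bpm_state").fetchone()[0])          # 1 active row
print(conn.execute("SELECT COUNT(*) FROM bpm_state_archive").fetchone()[0])  # 2 archived rows
```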