
Value Stream Processing Oddity

alexe1
1-Newbie


While running some tests recently, I ran into a little oddity with logged properties and value streams. It turns out that if you change a property extremely quickly (e.g., a "for" loop that just changes that property x number of times), the Value Stream Subsystem does not actually appear to write all of those changes to the Value Stream. I ran a 500-iteration "for" loop that changed a string property on each iteration, and I only got 2 entries back from QueryPropertyHistory. If I pause for 2 ms on each iteration, it records all 500.
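Roughly, the test looked like this (a minimal sketch as a service on the Thing itself; "TestString" stands in for the actual logged string property):

    // Write a logged property 500 times in a tight loop.
    for (var i = 0; i < 500; i++) {
        me.TestString = "iteration " + i;
        // pause(2);  // with this 2 ms pause uncommented, all 500 entries show up
    }

    // Count what actually landed in the value stream.
    var history = me.QueryPropertyHistory({ maxItems: 1000 });
    logger.info("Entries recorded: " + history.rows.length);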

On its own this is odd enough, but what's worse is that the Subsystem Monitor says it queued and wrote all 500. Where is the disconnect here? It appears to me that it's catching all 500 property writes/changes but failing to record them to the Value Stream correctly, and you would never know that looking at the Monitor.

Anybody have any experience with this? Has anyone tweaked the config of the Value Stream Subsystem to optimize for this type of thing? I don't have a current use case where this would be an issue, but one certainly exists.

Additionally, and semi-unrelated: the pause() snippet gives a goofy error when you input 1 ms. pause(1) throws: Wrapped java.lang.Exception: Invalid Pause Value : For input string: "1.0" Cause: Invalid Pause Value : For input string: "1.0". pause(2) and above works just fine. It's just a curious limitation, with an error message that doesn't explain why.

3 REPLIES

Hi Alex,

If you can reproduce it easily, send a test case to Support and they will check. I had other kinds of problems with Value Streams, like the first time you update a property it starts indexing and doesn't store all the data. Your case may be related to an indexing problem, with different entries getting the same hash or similar, since the timestamps are exactly the same: you only have millisecond precision, and you are recording faster than that. But maybe it's a totally different case.
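A quick way to see what I mean (just a sketch): in a tight loop, consecutive writes frequently fall within the same millisecond, so entries keyed by timestamp would collide.

    // Count how many consecutive loop iterations see the exact same timestamp.
    var last = 0;
    var duplicates = 0;
    for (var i = 0; i < 500; i++) {
        var now = new Date().getTime();  // millisecond resolution only
        if (now === last) {
            duplicates++;
        }
        last = now;
    }
    logger.info("Duplicate timestamps in 500 iterations: " + duplicates);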

And which version are you testing on? I had this kind of problem on 6.5.<=6 (more or less); now I'm on 6.5.13 and moving soon to 7.2 (when it's released).

About pause(): yes, it's a tricky one. You also can't pause for more than one minute (maximum = 59999), and you should pass an INTEGER/LONG value, otherwise you will get a similar error. Take it easy with pause(), as it blocks any transaction your service started and you can end up with lots of blocking conditions.
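If you need to call it with computed values, something like this hypothetical guard (not a ThingWorx built-in) avoids both errors:

    // Hypothetical wrapper reflecting the limits above: pause() wants an
    // integer number of milliseconds, roughly in the range 2..59999.
    function safePause(ms) {
        var clamped = Math.min(Math.max(Math.round(ms), 2), 59999);
        pause(clamped);
    }

    safePause(500.7);   // rounds to 501 and pauses
    safePause(120000);  // clamps to 59999 instead of throwing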

Carles

Interesting... Thanks Carles

It's definitely really easy to reproduce, but I might wait until I upgrade my version before notifying Support. I noticed this on 6.0.8, and usually Support's first response is "Upgrade and see if it's still there...", which is fair in most cases. I think you might be right that the millisecond precision is the root cause, but I'd be curious to hear it first-hand from ThWx.

Interesting note on pause() too. I had never tried more than a minute, but good point about the blocking. I almost never use it, except when I want some spaced-out timestamps during development and don't feel like faking them. I still don't fully understand why pause(1) or pause(1.0) gives that error. If you try more than a minute, it gives you an out-of-range error that defines the range as 1-60000. It doesn't particularly matter either way, but I feel like the errors for <2 and >59999 should be the same. Who knows, and at this point I guess who cares: both are invalid inputs. Such a minor point.

I'll drop back here if I learn anything else along the way or have other Value Stream troubles.

wposner-2
12-Amethyst
(To:alexe1)

What persistence type are you using, and which version of TWX? There are settings that affect how often TWX writes stream entries. We found this when we were writing data every few milliseconds from a remote device. We expected to see data being saved on every refresh, but that was not the case. After a bunch of digging around, we found a setting that, once changed, caused TWX to write the stream values much more frequently. If you're using Neo4J, take a look at the following screenshot:

[Screenshot attached: Screen Shot 2016-06-13 at 10.37.28 AM.png]

The "Max wait Time before flushing..." are the props you need to change.  Here, we've set the values to 1000 milliseconds. If you're using Postgres, the same props are in your platform-settings.json file:

         "StreamProcessorSettings":{ 

            "maximumBlockSize":2500,

            "maximumQueueSize":250000,

            "maximumWaitTime":1000,

            "numberOfProcessingThreads":5,

            "scanRate":5,

            "sizeThreshold":1000

         },

         "ValueStreamProcessorSettings":{ 

            "maximumBlockSize":2500,

            "maximumQueueSize":500000,

            "maximumWaitTime":1000,

            "numberOfProcessingThreads":5,

            "scanRate":5,

            "sizeThreshold":1000

         }
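After restarting with the new settings, a quick sanity check (a sketch along the lines of the loop test above; "TestString" is a placeholder for your logged property):

    // Rerun the no-pause loop and see whether the value stream now
    // captures every write (ideally all 500).
    for (var i = 0; i < 500; i++) {
        me.TestString = "iteration " + i;
    }
    var history = me.QueryPropertyHistory({ maxItems: 1000 });
    logger.info("Entries after tuning: " + history.rows.length);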
