feat: Introducing vr_aggregator with VRDirectCB and VRFRMODE #477

Open · wants to merge 51 commits into master

Conversation


@sivasathyaseeelan sivasathyaseeelan commented Feb 15, 2025

using DiffEqBase, JumpProcesses, OrdinaryDiffEq, StochasticDiffEq, Test, Plots
using Random, LinearSolve
using StableRNGs
rng = StableRNG(12345)

# Jump rate proportional to u[1]; each firing halves u[1]
rate = (u, p, t) -> u[1]
affect! = (integrator) -> (integrator.u[1] = integrator.u[1] / 2)
jump = VariableRateJump(rate, affect!, interp_points = 1000)
jump2 = deepcopy(jump)

# Exponential growth between jumps: du/dt = u
f = function (du, u, p, t)
    du[1] = u[1]
end

prob = ODEProblem(f, [0.2], (0.0, 10.0))
jump_prob = JumpProblem(prob, Direct(), jump, jump2; rng = rng)

integrator = init(jump_prob, Tsit5())

sol = solve(jump_prob, Tsit5())
plot(sol)
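Based on the PR title, the new `vr_aggregator` keyword presumably lets callers choose between the two variable-rate handling modes, `VRDirectCB` and `VRFRMODE`. A hypothetical sketch of the selection — the exact keyword spelling and constructor forms here are assumptions drawn from the title, not confirmed against the diff:

```julia
# Hypothetical: select the new callback-based handling (drops ExtendedJumpArray)
jump_prob_cb = JumpProblem(prob, Direct(), jump; vr_aggregator = VRDirectCB(), rng = rng)

# Hypothetical: keep the legacy functional-rate mode
jump_prob_fr = JumpProblem(prob, Direct(), jump; vr_aggregator = VRFRMODE(), rng = rng)
```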

[Screenshot, 2025-03-11: plot of the solution produced by the script above]

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes was updated
  • The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC
  • Any new documentation only uses public API


Signed-off-by: sivasathyaseeelan <dnsiva.sathyaseelan.chy21@iitbhu.ac.in>
@sivasathyaseeelan sivasathyaseeelan marked this pull request as draft February 15, 2025 20:38
@sivasathyaseeelan sivasathyaseeelan marked this pull request as ready for review March 10, 2025 23:15
@sivasathyaseeelan sivasathyaseeelan changed the title refactor: using an integrating callback from DiffEqCallbacks.jl which drops usage of ExtendedJumpArrays refactor: drops usage of ExtendedJumpArrays Mar 10, 2025
@sivasathyaseeelan sivasathyaseeelan changed the title feat: Introducing variablerate_aggregator feat: Introducing vr_aggregator with VRDirectCB and VRFRMODE Mar 24, 2025
@isaacsas (Member)

@sivasathyaseeelan I'm traveling the next few days, but will try to take a look Sunday if I have a chance, or next Tuesday.

@@ -0,0 +1,140 @@
using DiffEqBase, JumpProcesses, OrdinaryDiffEq, StochasticDiffEq
Member

This should be moved to SciMLBenchmarks.jl

Comment on lines +52 to +53
rate_funcs = [jump.rate for jump in vjumps]
affect_funcs = [jump.affect! for jump in vjumps]
rate_funcs = [jump.rate for jump in vjumps]
affect_funcs = [jump.affect! for jump in vjumps]
Member

These should get function-wrapped if they aren't tuples?
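For context, a minimal sketch of the kind of function-wrapping the comment suggests, using FunctionWrappers.jl — the signature types (`Vector{Float64}` state, `Any` parameters, `Float64` time) are illustrative assumptions, not taken from the PR:

```julia
using FunctionWrappers: FunctionWrapper

# Fix one concrete call signature for all rate functions
const RateWrapper = FunctionWrapper{Float64, Tuple{Vector{Float64}, Any, Float64}}

rate1(u, p, t) = u[1]
rate2(u, p, t) = 2.0 * u[1]

# A concretely typed vector of wrappers avoids dynamic dispatch on each call,
# unlike a Vector of heterogeneous closures.
rate_funcs = RateWrapper[RateWrapper(rate1), RateWrapper(rate2)]
rate_funcs[1]([0.5], nothing, 0.0)  # calls rate1 through the wrapper
```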

cur_rates::Vector{T}

function VRDirectCBEventCache(jumps::JumpSet; rng = DEFAULT_RNG)
T = Float64 # Could infer from jumps or t later
Member

Should get an issue.

end
rate_increment *= (dt / 2)

cache.cumulative_rate += rate_increment
cache.cumulative_rate += rate_increment
Member

What is this quantity? It isn't actually the cumulative rate, since it doesn't take condition rejections into account. It also appears to be unused.

Member

It should probably just be deleted?

rate_increment *= (dt / 2)

cache.cumulative_rate += rate_increment
cache.total_rate_cache = total_variable_rate(vjumps, u, p, t, cache.cur_rates)
Member

Why is this evaluated here instead of in the affect!?

Member

Move this into the affect!; it's unnecessary on most steps.

# Condition functor defined directly on the cache
function (cache::VRDirectCBEventCache)(u, t, integrator)
if integrator.t != cache.current_time
cache.prev_threshold = cache.current_threshold
cache.prev_threshold = cache.current_threshold
Member

This has a bug because cache.current_threshold is never set, so you'll never actually accumulate.

Comment on lines +113 to +114
cache.prev_threshold = cache.current_threshold
cache.current_threshold = randexp(rng)
Member

Suggested change:
- cache.prev_threshold = cache.current_threshold
- cache.current_threshold = randexp(rng)
+ cache.current_threshold = randexp(rng)
+ cache.prev_threshold = cache.current_threshold

Comment on lines +85 to +86

return cache.prev_threshold - rate_increment
Member

Suggested change:
- return cache.prev_threshold - rate_increment
+ cache.current_threshold = cache.prev_threshold - rate_increment # cache increment if not zeroed in this round
+ return cache.current_threshold

@ChrisRackauckas (Member)

It's impossible for this to work in general without actually updating the current threshold (https://github.com/SciML/JumpProcesses.jl/pull/477/files#r2052157167). Make a test case that stresses this and fails; after bringing in that change, it should pass. You should be able to see the problem by setting something like dtmax = 0.0001: you will get zero jumps in that case, since the rates never accumulate.
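A minimal sketch of the kind of stress test being suggested, assuming the reproduction script's model from the PR description. If the threshold never accumulates across steps, no jumps fire under a tiny dtmax and the solution follows the jump-free trajectory 0.2 * exp(t):

```julia
using JumpProcesses, OrdinaryDiffEq, StableRNGs, Test
rng = StableRNG(12345)

rate(u, p, t) = u[1]
affect!(integrator) = (integrator.u[1] /= 2)
jump = VariableRateJump(rate, affect!)

f(du, u, p, t) = (du[1] = u[1])
prob = ODEProblem(f, [0.2], (0.0, 10.0))
jump_prob = JumpProblem(prob, Direct(), jump; rng = rng)

# Tiny dtmax forces the threshold to accumulate over many small steps
sol = solve(jump_prob, Tsit5(); dtmax = 0.0001)

# With correct accumulation, jumps fire and the final state stays well
# below the jump-free value 0.2 * exp(10); with the bug, it does not.
@test sol.u[end][1] < 0.2 * exp(10.0)
```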
