Feature #20425


Optimize forwarding callers and callees

Added by tenderlovemaking (Aaron Patterson) 7 months ago. Updated 5 months ago.

Status:
Closed
Assignee:
-
Target version:
-
[ruby-core:117498]

Description

This PR optimizes forwarding callers and callees. It only optimizes methods that take ... as their sole parameter and then pass ... on to other calls.

Calls it optimizes look like this:

def bar(a) = a
def foo(...) = bar(...) # optimized
foo(123)

def bar(a) = a
def foo(...) = bar(1, 2, ...) # optimized
foo(123)

def bar(*a) = a

def foo(...)
  list = [1, 2]
  bar(*list, ...) # optimized
end
foo(123)

All variants of the above using super are also optimized, including a bare super like this:

def foo(...)
  super
end
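
For instance, an explicit-super variant follows the same pattern as the send cases above (the class names here are purely illustrative):

class Parent
  def foo(a) = a
end

class Child < Parent
  def foo(...) = super(...) # optimized
end

Child.new.foo(123)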

This patch eliminates intermediate allocations made when calling methods that accept ....
We can observe allocation elimination like this:

def m
  x = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - x
end

def bar(a) = a
def foo(...) = bar(...)

def test
  m { foo(123) }
end

test
p test # allocates 1 object on master, but 0 objects with this patch

def bar(a, b:) = a + b
def foo(...) = bar(...)

def test
  m { foo(1, b: 2) }
end

test
p test # allocates 2 objects on master, but 0 objects with this patch

How does it work?

This patch works by using a dynamic stack size when passing forwarded parameters to callees.
The caller's info object (known as the "CI") contains the stack size of the
parameters, so we pass the CI object itself as a parameter to the callee.
When forwarding parameters, the forwarding ISeq uses the caller's CI to determine how much stack to copy, then copies the caller's stack before calling the callee.
The CI at the forwarded call site is adjusted using information from the caller's CI.

I think this description is kind of confusing, so let's walk through an example with code.

def delegatee(a, b) = a + b

def delegator(...)
  delegatee(...)  # CI2 (FORWARDING)
end

def caller
  delegator(1, 2) # CI1 (argc: 2)
end

Before we call the delegator method, the stack looks like this:

Executing Line | Code                                  | Stack
---------------+---------------------------------------+--------
              1| def delegatee(a, b) = a + b           | self
              2|                                       | 1
              3| def delegator(...)                    | 2
              4|   #                                   |
              5|   delegatee(...)  # CI2 (FORWARDING)  |
              6| end                                   |
              7|                                       |
              8| def caller                            |
          ->  9|   delegator(1, 2) # CI1 (argc: 2)     |
             10| end                                   |

The ISeq for delegator is tagged as "forwardable", so when caller calls into delegator, it writes CI1 onto the stack as a local variable for the delegator method. The delegator method has a special local called ... that holds the caller's CI object.

Here is the ISeq disasm for delegator:

== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself                                                          (   1)[LiCa]
0001 getlocal_WC_0                          "..."@0
0003 send                                   <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave                                  [Re]

The local called ... will contain the caller's CI: CI1.

Here is the stack when we enter delegator:

Executing Line | Code                                  | Stack
---------------+---------------------------------------+--------
              1| def delegatee(a, b) = a + b           | self
              2|                                       | 1
              3| def delegator(...)                    | 2
           -> 4|   #                                   | CI1 (argc: 2)
              5|   delegatee(...)  # CI2 (FORWARDING)  | cref_or_me
              6| end                                   | specval
              7|                                       | type
              8| def caller                            |
              9|   delegator(1, 2) # CI1 (argc: 2)     |
             10| end                                   |

The CI at delegatee on line 5 is tagged as "FORWARDING", so it knows to
memcopy the caller's stack before calling delegatee. In this case, it will
memcopy self, 1, and 2 to the stack before calling delegatee. It knows how much
memory to copy from the caller because CI1 contains stack size information
(argc: 2).

Before executing the send instruction, we push ... on the stack. The
send instruction pops ..., and because it is tagged with FORWARDING, it
knows to memcopy (using the information in the CI it just popped):

== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself                                                          (   1)[LiCa]
0001 getlocal_WC_0                          "..."@0
0003 send                                   <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave                                  [Re]

Instruction 0001 puts the caller's CI on the stack. send is tagged with FORWARDING, so it reads the CI and copies the caller's stack onto this stack:

Executing Line | Code                                  | Stack
---------------+---------------------------------------+--------
              1| def delegatee(a, b) = a + b           | self
              2|                                       | 1
              3| def delegator(...)                    | 2
              4|   #                                   | CI1 (argc: 2)
           -> 5|   delegatee(...)  # CI2 (FORWARDING)  | cref_or_me
              6| end                                   | specval
              7|                                       | type
              8| def caller                            | self
              9|   delegator(1, 2) # CI1 (argc: 2)     | 1
             10| end                                   | 2

The "FORWARDING" call site combines information from CI1 with CI2 in order
to support passing other values in addition to the ... value, as well as
perfectly forward splat args, kwargs, etc.

Since we're able to copy the caller's stack directly into delegator's stack, we can avoid allocating objects.
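
To see where the eliminated allocations come from, here is a conceptual desugaring (not the literal implementation) of what a ... method cost on master:

def bar(a, b:) = a + b

# Roughly what def foo(...) = bar(...) paid for on master:
def foo(*args, **kwargs, &block) # allocates an Array and a Hash per call
  bar(*args, **kwargs, &block)
end

foo(1, b: 2) # the Array and Hash exist only to be unpacked again

With this patch, the arguments stay on the VM stack and are memcopied directly, so neither container is created.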

Why?

I want to do this to eliminate object allocations in delegating methods. My long-term goal is to implement Class#new in Ruby, and that implementation uses ....

I was able to implement Class#new in Ruby here. If we adopt the technique in this patch, then we can optimize allocation for objects whose initialize takes keyword parameters.
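
As a rough sketch of the idea (not the code in the branch linked above; MyClass is a stand-in), a Ruby implementation of new looks something like this, and with the ... optimization its forwarding adds no intermediate allocations:

class MyClass
  def self.new(...)
    obj = allocate
    obj.send(:initialize, ...) # initialize is private, hence send
    obj
  end

  def initialize(foo:)
    @foo = foo
  end
end

MyClass.new(foo: 1)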

For example, this code will allocate 2 objects: one for SomeObject, and one
for the kwargs:

SomeObject.new(foo: 1)

If we combine this technique with a Ruby implementation of Class#new, then we can reduce allocations for this common operation.
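
Using the allocation-counting helper m from above, the effect can be measured like this (a sketch; SomeObject is given a keyword-taking initialize for illustration):

class SomeObject
  def initialize(foo:)
    @foo = foo
  end
end

def test_alloc
  m { SomeObject.new(foo: 1) }
end

test_alloc
p test_alloc # 2 on master: the SomeObject instance plus the kwargs hash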

Updated by tenderlovemaking (Aaron Patterson) 7 months ago

We have tests that measure allocations. I had to update the tests because of the reduction in allocations, so you can see the impact of this patch here.

Updated by ko1 (Koichi Sasada) 7 months ago · Edited

I think it is a good idea, but I'm concerned that the code on GitHub may affect normal cases because of the additional code path.

Updated by ko1 (Koichi Sasada) 7 months ago

Instead of introducing new rules and complex code, I think providing a lighter-weight container than Array/Hash is better.

Consider def f(...) = g(...):

  • Introduce an argument object (like JS's arguments) as an imemo and pass it as the single parameter of a method def f(...)
    • of course, the object is not visible to Ruby users.
    • argobj has a memory buffer (argbuff), and all arguments (and the CI) are copied into argbuff.
    • calling another method with g(...) expands all arguments from argbuff.
  • argbuff memory management
    • argbuff is allocated from a ractor-local argbuff heap (4KB, for example) by bump allocation.
    • if the argbuff heap is not big enough, all existing argobjs copy their argbuffs to malloc-managed memory (evacuation).

It costs (1) an argobj allocation and (2) two copies (one into argbuff and one when calling g(); the original proposal copies only once, when calling g()), so it is not ultimately lightweight, but it is simple.
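
A rough Ruby analogy of this design (the real argobj would be an internal imemo invisible to Ruby code, so every name below is illustrative):

# Hypothetical sketch: f packs its arguments into one container object
# (one allocation plus the first copy); the forwarding call expands them
# back out (the second copy).
ArgObj = Struct.new(:args, :kwargs, :block)

def g(a, b:) = a + b

def f(*args, **kwargs, &block)                    # stands in for def f(...)
  argobj = ArgObj.new(args, kwargs, block)
  g(*argobj.args, **argobj.kwargs, &argobj.block) # stands in for g(...)
end

f(1, b: 2) # => 3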

For the Class#new case, I think providing new VM_METHOD_TYPE_OPTIMIZED logic is better.

Updated by tenderlovemaking (Aaron Patterson) 7 months ago

ko1 (Koichi Sasada) wrote in #note-2:

I think it is a good idea, but I'm concerned that the code on GitHub may affect normal cases because of the additional code path.

It only impacts the send and invokesuper instructions. If there is a speed impact, I think we could emit "forwarding_send" or "forwarding_super" instructions and eliminate the extra code path. Also, as you noted on the PR, most iseqs are "simple" and we already have to check for that. Forwardable iseqs are just not considered "simple" 😆

ko1 (Koichi Sasada) wrote in #note-3:

Instead of introducing new rules and complex code, I think providing a lighter-weight container than Array/Hash is better.

Consider def f(...) = g(...):

  • Introduce an argument object (like JS's arguments) as an imemo and pass it as the single parameter of a method def f(...)
    • of course, the object is not visible to Ruby users.
    • argobj has a memory buffer (argbuff), and all arguments (and the CI) are copied into argbuff.
    • calling another method with g(...) expands all arguments from argbuff.
  • argbuff memory management
    • argbuff is allocated from a ractor-local argbuff heap (4KB, for example) by bump allocation.
    • if the argbuff heap is not big enough, all existing argobjs copy their argbuffs to malloc-managed memory (evacuation).

It costs (1) an argobj allocation and (2) two copies (one into argbuff and one when calling g(); the original proposal copies only once, when calling g()), so it is not ultimately lightweight, but it is simple.

I'm not sure how this is simpler than the design I proposed. ... callees will need to know to copy the stack, which requires a flag (like my patch). Call sites will need to know to expand ..., so they will need a flag (like my patch). It still requires special bookkeeping of the SP when expanding ....

Also, it sounds like it requires more logic (the introduction of another memory manager) to manage the argbuff buffer, and maybe logic for when the stack escapes (for example, def f(...) = lambda { g(...) }).

For the Class#new case, I think providing new VM_METHOD_TYPE_OPTIMIZED logic is better.

I would like to test this because I am not sure. Optimizing ... lets us implement Class#new as a (mostly regular) Ruby method, which I think is very advantageous for YJIT. Also adding a new iseq type means adding more complexity that we don't need (if this patch is accepted).

Updated by tenderlovemaking (Aaron Patterson) 7 months ago

tenderlovemaking (Aaron Patterson) wrote in #note-4:

ko1 (Koichi Sasada) wrote in #note-2:

I think it is a good idea, but I'm concerned that the code on GitHub may affect normal cases because of the additional code path.

It only impacts the send and invokesuper instructions. If there is a speed impact, I think we could emit "forwarding_send" or "forwarding_super" instructions and eliminate the extra code path. Also, as you noted on the PR, most iseqs are "simple" and we already have to check for that. Forwardable iseqs are just not considered "simple" 😆

I tried running the vm_call benchmarks. I can't tell if there is any impact. Benchmark results are here.

Updated by ko1 (Koichi Sasada) 7 months ago

My idea is simple because it is a straightforward replacement for the array (and hash) that contain arguments (I only proposed a lighter-weight argument container than an array and hash).

This proposal breaks assumptions about the VM stack structure. I'm afraid this kind of breakage can cause serious issues.
But I may be misunderstanding, so let's talk at RubyKaigi in Okinawa with a whiteboard.

Updated by tenderlovemaking (Aaron Patterson) 7 months ago

ko1 (Koichi Sasada) wrote in #note-6:

My idea is simple because it is a straightforward replacement for the array (and hash) that contain arguments (I only proposed a lighter-weight argument container than an array and hash).

This proposal breaks assumptions about the VM stack structure. I'm afraid this kind of breakage can cause serious issues.

For what it's worth, we've tested this patch in Shopify CI and it's passing all tests. We might be able to try it in production, but I need to ask some people.

But I may be misunderstanding, so let's talk at RubyKaigi in Okinawa with a whiteboard.

Sure, we can discuss it at RubyKaigi. I agree your proposal would maintain the stack layout when calling into ... methods, but I don't think the code would be any simpler, due to the extra memory management / GC complexity. I was able to simplify the patch somewhat, so please take a look again.

I decided to test this against RailsBench, and this patch does speed up RailsBench (slightly).

Here is RailsBench with master:

$ bundle exec ruby benchmark.rb
ruby 3.4.0dev (2024-04-18T21:11:25Z master 64d0817ea9) [arm64-darwin23]
Command: bundle check 2> /dev/null || bundle install
The Gemfile's dependencies are satisfied
Command: bin/rails db:migrate db:seed
Using 100 posts in the database
itr #1: 1554ms
itr #2: 1519ms
itr #3: 1515ms
itr #4: 1553ms
itr #5: 1550ms
itr #6: 1526ms
itr #7: 1574ms
itr #8: 1522ms
itr #9: 1521ms
itr #10: 1529ms
itr #11: 1526ms
itr #12: 1550ms
itr #13: 1522ms
itr #14: 1551ms
itr #15: 1541ms
itr #16: 1538ms
itr #17: 1552ms
itr #18: 1536ms
itr #19: 1560ms
itr #20: 1549ms
itr #21: 1536ms
itr #22: 1529ms
itr #23: 1542ms
itr #24: 1502ms
itr #25: 1559ms
RSS: 139.1MiB
MAXRSS: 142640.0MiB
Writing file /Users/aaron/git/yjit-bench/benchmarks/railsbench/data/results-ruby-3.4.0-2024-04-18-143710.json
Average of last 10, non-warmup iters: 1540ms

Here is RailsBench with the ... optimization:

$ bundle exec ruby benchmark.rb
ruby 3.4.0dev (2024-04-18T21:20:23Z speed-forward 4d698e6d46) [arm64-darwin23]
Command: bundle check 2> /dev/null || bundle install
The Gemfile's dependencies are satisfied
Command: bin/rails db:migrate db:seed
Using 100 posts in the database
itr #1: 1537ms
itr #2: 1523ms
itr #3: 1495ms
itr #4: 1501ms
itr #5: 1520ms
itr #6: 1514ms
itr #7: 1514ms
itr #8: 1486ms
itr #9: 1524ms
itr #10: 1493ms
itr #11: 1472ms
itr #12: 1509ms
itr #13: 1497ms
itr #14: 1492ms
itr #15: 1500ms
itr #16: 1507ms
itr #17: 1526ms
itr #18: 1502ms
itr #19: 1505ms
itr #20: 1492ms
itr #21: 1501ms
itr #22: 1529ms
itr #23: 1519ms
itr #24: 1537ms
itr #25: 1499ms
RSS: 140.0MiB
MAXRSS: 143504.0MiB
Writing file /Users/aaron/git/yjit-bench/benchmarks/railsbench/data/results-ruby-3.4.0-2024-04-18-143623.json
Average of last 10, non-warmup iters: 1512ms

The average iteration time decreases by about 28ms. I get basically similar results on my x86 machine.

master:

aaron@whiteclaw ~/g/y/b/railsbench (main)> bundle exec ruby benchmark.rb
ruby 3.4.0dev (2024-04-18T21:21:01Z master 6443d690ae) [x86_64-linux]
Command: bundle check 2> /dev/null || bundle install
The Gemfile's dependencies are satisfied
Command: bin/rails db:migrate db:seed
Using 100 posts in the database
itr #1: 2227ms
itr #2: 2173ms
itr #3: 2174ms
itr #4: 2171ms
itr #5: 2177ms
itr #6: 2171ms
itr #7: 2172ms
itr #8: 2171ms
itr #9: 2170ms
itr #10: 2173ms
itr #11: 2170ms
itr #12: 2173ms
itr #13: 2170ms
itr #14: 2171ms
itr #15: 2174ms
itr #16: 2171ms
itr #17: 2173ms
itr #18: 2170ms
itr #19: 2176ms
itr #20: 2169ms
itr #21: 2175ms
itr #22: 2169ms
itr #23: 2170ms
itr #24: 2173ms
itr #25: 2170ms
RSS: 110.0MiB
MAXRSS: 110.1MiB
Writing file /home/aaron/git/yjit-bench/benchmarks/railsbench/data/results-ruby-3.4.0-2024-04-18-150418.json
Average of last 10, non-warmup iters: 2171ms

This branch:

aaron@whiteclaw ~/g/y/b/railsbench (main)> bundle exec ruby benchmark.rb
ruby 3.4.0dev (2024-04-18T21:20:23Z speed-forward 4d698e6d46) [x86_64-linux]
Command: bundle check 2> /dev/null || bundle install
The Gemfile's dependencies are satisfied
Command: bin/rails db:migrate db:seed
Using 100 posts in the database
itr #1: 2199ms
itr #2: 2157ms
itr #3: 2158ms
itr #4: 2153ms
itr #5: 2156ms
itr #6: 2157ms
itr #7: 2155ms
itr #8: 2153ms
itr #9: 2152ms
itr #10: 2160ms
itr #11: 2153ms
itr #12: 2156ms
itr #13: 2153ms
itr #14: 2159ms
itr #15: 2154ms
itr #16: 2154ms
itr #17: 2157ms
itr #18: 2155ms
itr #19: 2158ms
itr #20: 2152ms
itr #21: 2156ms
itr #22: 2154ms
itr #23: 2153ms
itr #24: 2156ms
itr #25: 2151ms
RSS: 107.7MiB
MAXRSS: 107.8MiB
Writing file /home/aaron/git/yjit-bench/benchmarks/railsbench/data/results-ruby-3.4.0-2024-04-18-150520.json
Average of last 10, non-warmup iters: 2154ms

Maybe we could try merging this? We can revert if it causes problems. Anyway, I'm happy to discuss in Okinawa! 😄

Updated by tenderlovemaking (Aaron Patterson) 5 months ago

I uploaded the slides about this feature that I presented at the dev meeting in Okinawa here.

I've rerun the benchmarks from the slides, and the results are below. master-ruby is the master branch; fwd-ruby is the experimental branch. This branch only impacts the send and invokesuper instructions, so I've only included benchmarks that exercise those two instructions.

Forwarding positional parameters

def recv(a, b)
  a + b
end

def call(...)
  recv(...)
end

# def run
#   call(1, 2)
#   call(1, 2)
#   call(1, 2)
#   ...
eval "def run; " + 200.times.map {
  "call(1, 2)"
}.join("; ") + "; end"

200000.times do
  run
end

Results:

Benchmark 1: fwd-ruby/miniruby -v ruby/test.rb
  Time (mean ± σ):      1.241 s ±  0.022 s    [User: 1.226 s, System: 0.004 s]
  Range (min … max):    1.215 s …  1.294 s    10 runs
 
Benchmark 2: master-ruby/miniruby -v ruby/test.rb
  Time (mean ± σ):      2.834 s ±  0.009 s    [User: 2.800 s, System: 0.011 s]
  Range (min … max):    2.820 s …  2.846 s    10 runs
 
Summary
  fwd-ruby/miniruby -v ruby/test.rb ran
    2.28 ± 0.04 times faster than master-ruby/miniruby -v ruby/test.rb

The experimental branch is 2.28x faster.

Forwarding keyword parameters

def recv(a:, b:)
  a + b
end

def call(...)
  recv(...)
end

# def run
#   call(a: 1, b: 2)
#   call(a: 1, b: 2)
#   call(a: 1, b: 2)
#   call(a: 1, b: 2)
#   ...
eval "def run; " + 200.times.map {
  "call(a: 1, b: 2)"
}.join("; ") + "; end"

200000.times do
  run
end

Results:

Benchmark 1: fwd-ruby/miniruby -v ruby/test.rb
  Time (mean ± σ):      1.530 s ±  0.017 s    [User: 1.511 s, System: 0.004 s]
  Range (min … max):    1.503 s …  1.558 s    10 runs
 
Benchmark 2: master-ruby/miniruby -v ruby/test.rb
  Time (mean ± σ):      5.027 s ±  0.036 s    [User: 4.969 s, System: 0.018 s]
  Range (min … max):    4.988 s …  5.102 s    10 runs
 
Summary
  fwd-ruby/miniruby -v ruby/test.rb ran
    3.29 ± 0.04 times faster than master-ruby/miniruby -v ruby/test.rb

The experimental branch is 3.29x faster.

send instruction that always misses inline cache

class A
  def a; end
end

class B < A; end

a = A.new
b = B.new

def call_method(obj)
  obj.a { } # Always send instruction
end

# def run(a, b)
#   call_method(a)
#   call_method(b)
#   call_method(a)
#   call_method(b)
#   ...
eval "def run(a, b); " + 200.times.map {
  "call_method(a); call_method(b)"
}.join("; ") + "; end"

200000.times do
  run(a, b)
end

Results:

$ hyperfine 'fwd-ruby/miniruby -v ruby/test.rb' 'master-ruby/miniruby -v ruby/test.rb'
Benchmark 1: fwd-ruby/miniruby -v ruby/test.rb
  Time (mean ± σ):      1.934 s ±  0.014 s    [User: 1.910 s, System: 0.005 s]
  Range (min … max):    1.916 s …  1.965 s    10 runs
 
Benchmark 2: master-ruby/miniruby -v ruby/test.rb
  Time (mean ± σ):      1.749 s ±  0.020 s    [User: 1.728 s, System: 0.005 s]
  Range (min … max):    1.734 s …  1.802 s    10 runs
 
Summary
  master-ruby/miniruby -v ruby/test.rb ran
    1.11 ± 0.01 times faster than fwd-ruby/miniruby -v ruby/test.rb

The experimental branch is about 11% slower.

invokesuper instruction

Benchmark:

class A
  def a; end
end

class B < A;
  def a; super; end
end

b = B.new

def call_method(obj)
  obj.a # Calls invokesuper
end

# def run(b)
#   call_method(b)
#   call_method(b)
#   ...
eval "def run(b); " + 400.times.map {
  "call_method(b)"
}.join("; ") + "; end"

200000.times do
  run(b)
end

Results:

$ hyperfine 'fwd-ruby/miniruby -v ruby/test.rb' 'master-ruby/miniruby -v ruby/test.rb'
Benchmark 1: fwd-ruby/miniruby -v ruby/test.rb
  Time (mean ± σ):      2.460 s ±  0.077 s    [User: 2.432 s, System: 0.006 s]
  Range (min … max):    2.379 s …  2.629 s    10 runs
 
Benchmark 2: master-ruby/miniruby -v ruby/test.rb
  Time (mean ± σ):      2.173 s ±  0.011 s    [User: 2.148 s, System: 0.006 s]
  Range (min … max):    2.163 s …  2.201 s    10 runs
 
Summary
  master-ruby/miniruby -v ruby/test.rb ran
    1.13 ± 0.04 times faster than fwd-ruby/miniruby -v ruby/test.rb

The experimental branch is about 13% slower.

The experimental branch is a little slower on the send and invokesuper instructions. If we're worried about that slowdown, we can introduce specialized "forward_send" and "forward_invokesuper" instructions, which should eliminate the overhead in these instructions. We're able to detect ... uses at compile time, so adding these instructions isn't a problem (I have a patch for it, but I want to get this feature merged first).

@ko1 (Koichi Sasada) Can you take a look again please? I'd like to merge this and we can fix any issues going forward.

Thanks!

Updated by ko1 (Koichi Sasada) 5 months ago · Edited

How about introducing a sendforward instruction (forwardsend?) rather than extending send, to keep send simple? Because send will be used frequently, it should be simpler.

Updated by tenderlovemaking (Aaron Patterson) 5 months ago

ko1 (Koichi Sasada) wrote in #note-9:

How about introducing a sendforward instruction (forwardsend?) rather than extending send, to keep send simple? Because send will be used frequently, it should be simpler.

Ok, that's fine. I'll add it to this patch. How about for invokesuper?

Updated by tenderlovemaking (Aaron Patterson) 5 months ago

@ko1 (Koichi Sasada) do you mind reviewing the patch again? I've addressed the comments you made. I added two new instructions (one for send and one for invokesuper). I also simplified insns.def so it should be clear why we need a write barrier in the new instructions.
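
For reference, one way to check which instruction a forwarding call site compiles to (the exact instruction names in the output depend on the version of the patch):

def bar(a) = a
def foo(...) = bar(...)

# Disassemble foo and inspect the call to bar.
puts RubyVM::InstructionSequence.of(method(:foo)).disasm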

Updated by ko1 (Koichi Sasada) 5 months ago

ok go ahead!

Updated by tenderlovemaking (Aaron Patterson) 5 months ago

ko1 (Koichi Sasada) wrote in #note-12:

ok go ahead!

Thank you!

Updated by tenderlovemaking (Aaron Patterson) 5 months ago

  • Status changed from Open to Closed