8283689: Update the foreign linker VM implementation #7959

Closed
wants to merge 105 commits
Commits
105 commits
3f411d5
Initial push
mcimadamore Mar 16, 2022
6aa5f10
Revert to previous handshake logic
mcimadamore Mar 16, 2022
a3586cb
Revert changes to problem list
mcimadamore Mar 21, 2022
7403a78
Add TestUpcallStack to problem list
mcimadamore Mar 21, 2022
dac414c
Revert changes to scopedMemoryAccess.cpp
mcimadamore Mar 21, 2022
3fb495d
Fix writeOversized for booleans
mcimadamore Mar 21, 2022
6254192
Drop unused imports in Reflection.java
mcimadamore Mar 21, 2022
ef54894
Update copyright
mcimadamore Mar 21, 2022
8e6017d
Fix TestLayouts on 32-bits
mcimadamore Mar 21, 2022
6bb1b5c
Address review comments
mcimadamore Mar 22, 2022
3d3c30a
Simplify syslookup build changes
mcimadamore Mar 22, 2022
fc1fdb5
Add missing LIBS flag
mcimadamore Mar 22, 2022
4b2760d
rename syslookup lib on Windows
mcimadamore Mar 22, 2022
7ec71f7
Fix indentation in Lib.gmk
mcimadamore Mar 22, 2022
c9bc9a7
Drop redundant javadoc statements re. handling of nulls
mcimadamore Mar 23, 2022
acb3d4c
Update src/java.base/share/classes/java/lang/foreign/MemorySegment.java
mcimadamore Mar 24, 2022
b6aa681
Update src/java.base/share/classes/java/lang/foreign/MemorySegment.java
mcimadamore Mar 24, 2022
95f65ee
Update src/java.base/share/classes/java/lang/foreign/MemorySegment.java
mcimadamore Mar 24, 2022
3e8cfd7
Address review comments
mcimadamore Mar 24, 2022
4077608
Update src/java.base/share/classes/java/lang/foreign/SymbolLookup.java
mcimadamore Mar 24, 2022
6881b6d
Update src/java.base/share/classes/java/lang/foreign/SymbolLookup.java
mcimadamore Mar 24, 2022
d95c6d0
Address more review comments
mcimadamore Mar 24, 2022
6e7189b
Add --enable-preview to micro benchmark java options
mcimadamore Mar 24, 2022
504b564
Revert changes to RunTests.gmk
mcimadamore Mar 24, 2022
b86c658
Merge branch 'master' into foreign-preview
mcimadamore Mar 29, 2022
e557f6c
Drop support for Constable from MemoryLayout/FunctionDescriptor
mcimadamore Mar 29, 2022
55aee87
Use thread local storage to optimize attach of async threads
mcimadamore Mar 29, 2022
43dc6be
Switch to daemon threads for async upcalls
mcimadamore Mar 29, 2022
0bcc866
Fix bad usage of `@link` with primitive array types
mcimadamore Mar 30, 2022
af41a76
Tweak FunctionDescriptor::argumentLayouts to return an immutable list
mcimadamore Mar 30, 2022
247e5eb
Merge branch 'master' into foreign-preview
mcimadamore Mar 30, 2022
376af62
Make sure that layouts cannot overflow size
mcimadamore Mar 31, 2022
aeafdbe
* Cleanup `@throws` specification for get/setUtf8String
mcimadamore Mar 31, 2022
e17b6b5
Remove bogus line in javadoc of `MemoryLayout::structLayout`
mcimadamore Mar 31, 2022
f10791b
Tweak qualified reference to wrapOverflow
mcimadamore Apr 1, 2022
c1a7a6b
* Rename `CLinker` to `Linker`
mcimadamore Apr 4, 2022
0d03a0b
Forgot to add `CLinker` file
mcimadamore Apr 4, 2022
5b0aeb8
Address review comment
mcimadamore Apr 4, 2022
d21e792
Added UnsupportedOperationException on other functionalities (e.g. `V…
mcimadamore Apr 4, 2022
d84de51
Fix TestLinkToNativeRBP
mcimadamore Apr 4, 2022
c9afcd1
Fix UnrolledAccess benchmark
mcimadamore Apr 11, 2022
a68195a
Use a less expensive array element alignment check
mcimadamore Apr 11, 2022
66cebe7
Add @ForceInline annotation on session accessor
mcimadamore Apr 12, 2022
e01fcfe
Initial push
mcimadamore Apr 13, 2022
3a87df5
Drop `UnsupportedOperationException` from throws clause in FileChanne…
mcimadamore Apr 13, 2022
8637379
Remove whitespaces
mcimadamore Apr 13, 2022
1dc1e69
Add @see
mcimadamore Apr 14, 2022
857d046
Tweak support for array element var handle
mcimadamore Apr 14, 2022
2e3d57a
Add ValueLayout changes
mcimadamore Apr 14, 2022
746de5c
Merge branch 'master' into foreign-preview
mcimadamore Apr 27, 2022
6990183
Address CSR comments
mcimadamore Apr 27, 2022
5866bbb
Simplify libraryLookup impl
mcimadamore Apr 27, 2022
945317e
Downcall and upcall tests use wrong layout which is missing some trai…
mcimadamore Apr 28, 2022
d1fcf01
Tweak Addressable javadoc
mcimadamore Apr 29, 2022
6fdb96e
Tweak support for Linker lookup
mcimadamore May 3, 2022
dc309e8
Update src/java.base/share/classes/jdk/internal/misc/X-ScopedMemoryAc…
mcimadamore May 3, 2022
41d055a
Merge branch 'master' into foreign-preview
mcimadamore May 3, 2022
853f06b
Tweak examples in SymbolLookup javadoc
mcimadamore May 4, 2022
4d24ffa
* Clarify what happens when SymbolLookup::loaderLookup is invoked fro…
mcimadamore May 5, 2022
b71c4e9
Add tests for loaderLookup/restricted method corner cases
mcimadamore May 6, 2022
f823bf8
Update src/java.base/share/classes/jdk/internal/reflect/Reflection.java
mcimadamore May 6, 2022
f6c7bab
8274912: Eagerly generate native invokers
JornVernee Oct 15, 2021
9900ad0
8255902: Enable stack arguments for native invokers & optimized upcal…
JornVernee Oct 20, 2021
a6c1bf3
8255903: Enable multi-register return values for native invokers
JornVernee Nov 2, 2021
309998e
8275646: Implement optimized upcall stubs on AArch64
nick-arm Nov 9, 2021
9d9332f
8276987: Optimized upcall stub should be printed with -XX:+PrintStubCode
nick-arm Nov 15, 2021
0d98a90
8275647: Enable multi-register return values for optimized upcall stubs
JornVernee Nov 18, 2021
b5b2c95
8276647: Remove C2 trivial call intrinsification support
JornVernee Jan 31, 2022
3541a19
8277657: OptimizedEntryBlob should extend RuntimeBlob directly
JornVernee Nov 25, 2021
f749da3
8280721: Rewrite binding of foreign classes to use javaClasses.h/cpp
JornVernee Feb 21, 2022
5c1246e
8277601: OptimizedEntryBlob should implement preserve_callee_argument…
JornVernee Feb 22, 2022
1217d51
Fix other platforms
JornVernee Mar 17, 2022
eae0968
Fix other platforms, take 2
JornVernee Mar 28, 2022
bb45604
Replace TraceNativeInvokers flag with unified logging
JornVernee Mar 28, 2022
602ffc0
Polish
JornVernee Mar 28, 2022
7ba6061
Pass pointer to LogStream
JornVernee Mar 28, 2022
0445349
Remove spurious ProblemList change
JornVernee Mar 28, 2022
cb2fa54
Update riscv and arm stubs
JornVernee Apr 1, 2022
8c9a5e3
8284072: foreign/StdLibTest.java randomly crashes on MacOS/AArch64
nick-arm Apr 1, 2022
253a505
Remove comment about native calls in lcm.cpp
JornVernee Apr 5, 2022
e84e337
Remove unneeded ComputeMoveOrder
JornVernee Apr 12, 2022
3c88a2e
Merge branch 'master' into foreign-preview
mcimadamore May 9, 2022
43fd1b9
Merge branch 'foreign-preview-m' into JEP-19-VM-IMPL2
JornVernee May 9, 2022
7b77429
Fix crashes in heap segment benchmarks due to misaligned access
mcimadamore May 9, 2022
cdd006e
Merge branch 'master' into foreign-preview
mcimadamore May 11, 2022
7854fb6
Address some review comments
JornVernee May 11, 2022
abd2b6c
Migrate to GrowableArray
JornVernee May 11, 2022
c6754a1
Fix use of rt_call
JornVernee May 11, 2022
08f35e3
Merge branch 'foreign-preview-m' into JEP-19-VM-IMPL2
JornVernee May 11, 2022
b29ad8f
Block async exceptions during upcalls
JornVernee May 11, 2022
1c04a42
Revert "Block async exceptions during upcalls"
JornVernee May 12, 2022
9a7bb6b
Update Oracle copyright years
JornVernee May 12, 2022
8100e0a
Missed 2 years
JornVernee May 12, 2022
0f49ff0
Apply copyright year updates per request of @nick-arm
JornVernee May 12, 2022
b113aaa
Fix overwritten copyright years.
JornVernee May 12, 2022
aab2d15
Merge branch 'JEP-19-VM-IMPL2' of https://github.com/JornVernee/jdk i…
JornVernee May 12, 2022
f961121
Undo spurious changes.
JornVernee May 12, 2022
f55b6c5
Merge branch 'master' into JEP-19-VM-IMPL2
JornVernee May 12, 2022
e0fd8f0
fix space
JornVernee May 13, 2022
2ea5bc9
indentation
JornVernee May 13, 2022
ff8835e
Fix failure with SPEC disabled (accidentally dropped change)
JornVernee May 16, 2022
d611f36
Cleanup UL usage
JornVernee May 16, 2022
406f3e8
Missing ASSERT -> NOT_PRODUCT
JornVernee May 16, 2022
c3abb73
ifdef NOT_PRODUCT -> ifndef PRODUCT
JornVernee May 17, 2022
c3c1421
Merge branch 'master' into JEP-19-VM-IMPL2
JornVernee May 17, 2022
12 changes: 6 additions & 6 deletions src/hotspot/cpu/aarch64/foreign_globals_aarch64.cpp
Original file line number Diff line number Diff line change
@@ -52,16 +52,16 @@ const ABIDescriptor ForeignGlobals::parse_abi_descriptor(jobject jabi) {
constexpr Register (*to_Register)(int) = as_Register;

objArrayOop inputStorage = jdk_internal_foreign_abi_ABIDescriptor::inputStorage(abi_oop);
loadArray(inputStorage, INTEGER_TYPE, abi._integer_argument_registers, to_Register);
loadArray(inputStorage, VECTOR_TYPE, abi._vector_argument_registers, as_FloatRegister);
parse_register_array(inputStorage, INTEGER_TYPE, abi._integer_argument_registers, to_Register);
parse_register_array(inputStorage, VECTOR_TYPE, abi._vector_argument_registers, as_FloatRegister);

objArrayOop outputStorage = jdk_internal_foreign_abi_ABIDescriptor::outputStorage(abi_oop);
loadArray(outputStorage, INTEGER_TYPE, abi._integer_return_registers, to_Register);
loadArray(outputStorage, VECTOR_TYPE, abi._vector_return_registers, as_FloatRegister);
parse_register_array(outputStorage, INTEGER_TYPE, abi._integer_return_registers, to_Register);
parse_register_array(outputStorage, VECTOR_TYPE, abi._vector_return_registers, as_FloatRegister);

objArrayOop volatileStorage = jdk_internal_foreign_abi_ABIDescriptor::volatileStorage(abi_oop);
loadArray(volatileStorage, INTEGER_TYPE, abi._integer_additional_volatile_registers, to_Register);
loadArray(volatileStorage, VECTOR_TYPE, abi._vector_additional_volatile_registers, as_FloatRegister);
parse_register_array(volatileStorage, INTEGER_TYPE, abi._integer_additional_volatile_registers, to_Register);
parse_register_array(volatileStorage, VECTOR_TYPE, abi._vector_additional_volatile_registers, as_FloatRegister);

abi._stack_alignment_bytes = jdk_internal_foreign_abi_ABIDescriptor::stackAlignment(abi_oop);
abi._shadow_space_bytes = jdk_internal_foreign_abi_ABIDescriptor::shadowSpace(abi_oop);
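The hunks above rename `loadArray` to `parse_register_array`: a generic routine that walks an array of raw register indices and converts each entry through a supplied converter (`to_Register`, `as_FloatRegister`). A minimal standalone sketch of that pattern, with `std::vector` and a fake `Reg` type standing in for HotSpot's `GrowableArray` and register handles:

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for a HotSpot register handle.
struct Reg { int encoding; };

// Generic parse routine in the spirit of parse_register_array:
// convert each raw index with the supplied converter and append
// the result to the output array.
template <typename R, typename Conv>
void parse_register_array(const std::vector<int>& indices,
                          std::vector<R>& out, Conv converter) {
  for (int idx : indices) {
    out.push_back(converter(idx));
  }
}

// Converter analogous to as_Register / as_FloatRegister.
inline Reg as_reg(int i) { return Reg{i}; }
```

One template serves integer and vector registers alike; only the converter changes per call site.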
8 changes: 2 additions & 6 deletions src/hotspot/cpu/aarch64/frame_aarch64.cpp
@@ -376,7 +376,7 @@ OptimizedEntryBlob::FrameData* OptimizedEntryBlob::frame_data_for_frame(const fr
assert(frame.is_optimized_entry_frame(), "wrong frame");
// need unextended_sp here, since normal sp is wrong for interpreter callees
return reinterpret_cast<OptimizedEntryBlob::FrameData*>(
reinterpret_cast<char*>(frame.unextended_sp()) + in_bytes(_frame_data_offset));
reinterpret_cast<address>(frame.unextended_sp()) + in_bytes(_frame_data_offset));
}

bool frame::optimized_entry_frame_is_first() const {
@@ -396,13 +396,9 @@ frame frame::sender_for_optimized_entry_frame(RegisterMap* map) const {
assert(jfa->last_Java_sp() > sp(), "must be above this frame on stack");
// Since we are walking the stack now this nested anchor is obviously walkable
// even if it wasn't when it was stacked.
if (!jfa->walkable()) {
// Capture _last_Java_pc (if needed) and mark anchor walkable.
jfa->capture_last_Java_pc();
}
jfa->make_walkable();
map->clear();
assert(map->include_argument_oops(), "should be set by clear");
vmassert(jfa->last_Java_pc() != NULL, "not walkable");
frame fr(jfa->last_Java_sp(), jfa->last_Java_fp(), jfa->last_Java_pc());

return fr;
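The frame change above folds the explicit `if (!jfa->walkable()) { capture... }` sequence at each call site into a single `jfa->make_walkable()` call. A sketch of the shape of that refactor, with simplified types rather than HotSpot's real `JavaFrameAnchor`:

```cpp
#include <cassert>
#include <cstdint>

// Simplified anchor: make_walkable() performs its capture at most once
// and is safe to call repeatedly, so callers no longer need the guard.
class FrameAnchor {
  uintptr_t _last_pc = 0;
  bool _walkable = false;
 public:
  void record_pc(uintptr_t pc) { _last_pc = pc; }
  bool walkable() const { return _walkable; }
  uintptr_t last_pc() const { return _last_pc; }
  void make_walkable() {
    if (!_walkable) {   // capture happens only on the first call
      _walkable = true;
    }
  }
};
```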
8 changes: 3 additions & 5 deletions src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
@@ -5528,9 +5528,9 @@ static int reg2offset_out(VMReg r) {
return (r->reg2stack() + SharedRuntime::out_preserve_stack_slots()) * VMRegImpl::stack_slot_size;
}

// On 64 bit we will store integer like items to the stack as
// 64 bits items (Aarch64 abi) even though java would only store
// 32bits for a parameter. On 32bit it will simply be 32 bits
// On 64 bit we will store integer like items to the stack as
// 64 bits items (AArch64 ABI) even though java would only store
// 32bits for a parameter. On 32bit it will simply be 32 bits
// So this routine will do 32->32 on 32bit and 32->64 on 64bit
void MacroAssembler::move32_64(VMRegPair src, VMRegPair dst, Register tmp) {
if (src.first()->is_stack()) {
@@ -5544,8 +5544,6 @@ void MacroAssembler::move32_64(VMRegPair src, VMRegPair dst, Register tmp) {
}
} else if (dst.first()->is_stack()) {
// reg to stack
// Do we really have to sign extend???
// __ movslq(src.first()->as_Register(), src.first()->as_Register());
str(src.first()->as_Register(), Address(sp, reg2offset_out(dst.first())));
} else {
if (dst.first() != src.first()) {
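The comment touched above notes that on 64-bit AArch64 a 32-bit Java int occupies a full 64-bit stack slot. The widening a 32→64 move implies can be sketched in plain C++ (standing in for the generated instructions):

```cpp
#include <cassert>
#include <cstdint>

// 32->64 move: a signed 32-bit value widens to 64 bits by sign
// extension, so negative ints keep their value in the wider slot.
int64_t move32_64(int32_t v) {
  return static_cast<int64_t>(v);
}
```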
4 changes: 2 additions & 2 deletions src/hotspot/cpu/aarch64/universalNativeInvoker_aarch64.cpp
@@ -137,8 +137,8 @@ void NativeInvokerGenerator::generate() {
Register tmp2 = r10;

Register shuffle_reg = r19;
JavaCallConv in_conv;
NativeCallConv out_conv(_input_registers);
JavaCallingConvention in_conv;
NativeCallingConvention out_conv(_input_registers);
ArgumentShuffle arg_shuffle(_signature, _num_args, _signature, _num_args, &in_conv, &out_conv, shuffle_reg->as_VMReg());

#ifdef ASSERT
18 changes: 9 additions & 9 deletions src/hotspot/cpu/aarch64/universalUpcallHandler_aarch64.cpp
@@ -127,8 +127,8 @@ address ProgrammableUpcallHandler::generate_optimized_upcall_stub(jobject receiv
CodeBuffer buffer("upcall_stub_linkToNative", /* code_size = */ 2048, /* locs_size = */ 1024);

Register shuffle_reg = r19;
JavaCallConv out_conv;
NativeCallConv in_conv(call_regs._arg_regs, call_regs._args_length);
JavaCallingConvention out_conv;
NativeCallingConvention in_conv(call_regs._arg_regs);
ArgumentShuffle arg_shuffle(in_sig_bt, total_in_args, out_sig_bt, total_out_args, &in_conv, &out_conv, shuffle_reg->as_VMReg());
int stack_slots = SharedRuntime::out_preserve_stack_slots() + arg_shuffle.out_arg_stack_slots();
int out_arg_area = align_up(stack_slots * VMRegImpl::stack_slot_size, StackAlignmentInBytes);
@@ -149,8 +149,8 @@ address ProgrammableUpcallHandler::generate_optimized_upcall_stub(jobject receiv
}

int reg_save_area_size = compute_reg_save_area_size(abi);
RegSpiller arg_spilller(call_regs._arg_regs, call_regs._args_length);
RegSpiller result_spiller(call_regs._ret_regs, call_regs._rets_length);
RegSpiller arg_spilller(call_regs._arg_regs);
RegSpiller result_spiller(call_regs._ret_regs);

int shuffle_area_offset = 0;
int res_save_area_offset = shuffle_area_offset + out_arg_area;
@@ -239,7 +239,7 @@ address ProgrammableUpcallHandler::generate_optimized_upcall_stub(jobject receiv
// return value shuffle
if (!needs_return_buffer) {
#ifdef ASSERT
if (call_regs._rets_length == 1) { // 0 or 1
if (call_regs._ret_regs.length() == 1) { // 0 or 1
VMReg j_expected_result_reg;
switch (ret_type) {
case T_BOOLEAN:
@@ -259,16 +259,16 @@ address ProgrammableUpcallHandler::generate_optimized_upcall_stub(jobject receiv
}
// No need to move for now, since CallArranger can pick a return type
// that goes in the same reg for both CCs. But, at least assert they are the same
assert(call_regs._ret_regs[0] == j_expected_result_reg,
"unexpected result register: %s != %s", call_regs._ret_regs[0]->name(), j_expected_result_reg->name());
assert(call_regs._ret_regs.at(0) == j_expected_result_reg,
"unexpected result register: %s != %s", call_regs._ret_regs.at(0)->name(), j_expected_result_reg->name());
}
#endif
} else {
assert(ret_buf_offset != -1, "no return buffer allocated");
__ lea(rscratch1, Address(sp, ret_buf_offset));
int offset = 0;
for (int i = 0; i < call_regs._rets_length; i++) {
VMReg reg = call_regs._ret_regs[i];
for (int i = 0; i < call_regs._ret_regs.length(); i++) {
VMReg reg = call_regs._ret_regs.at(i);
if (reg->is_Register()) {
__ ldr(reg->as_Register(), Address(rscratch1, offset));
offset += 8;
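In the return-buffer branch above, the stub loads each return register from consecutive 8-byte slots of a buffer. A sketch of that walk, with plain memory loads standing in for the generated `ldr` instructions (the flat 8-byte-per-slot layout is assumed for illustration):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Load num_regs 64-bit values from a return buffer at 8-byte strides,
// mirroring the offset += 8 loop in the upcall stub.
std::vector<uint64_t> load_return_regs(const uint8_t* ret_buf, int num_regs) {
  std::vector<uint64_t> regs;
  int offset = 0;
  for (int i = 0; i < num_regs; i++) {
    uint64_t v;
    std::memcpy(&v, ret_buf + offset, sizeof(v));  // one register slot
    regs.push_back(v);
    offset += 8;
  }
  return regs;
}
```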
4 changes: 0 additions & 4 deletions src/hotspot/cpu/x86/foreign_globals_x86.hpp
@@ -27,10 +27,6 @@
#include "asm/macroAssembler.hpp"
#include "utilities/growableArray.hpp"

class outputStream;

constexpr size_t xmm_reg_size = 16; // size of XMM reg

struct ABIDescriptor {
GrowableArray<Register> _integer_argument_registers;
GrowableArray<Register> _integer_return_registers;
29 changes: 0 additions & 29 deletions src/hotspot/cpu/x86/foreign_globals_x86_32.hpp

This file was deleted.

12 changes: 6 additions & 6 deletions src/hotspot/cpu/x86/foreign_globals_x86_64.cpp
@@ -49,18 +49,18 @@ const ABIDescriptor ForeignGlobals::parse_abi_descriptor(jobject jabi) {
ABIDescriptor abi;

objArrayOop inputStorage = jdk_internal_foreign_abi_ABIDescriptor::inputStorage(abi_oop);
loadArray(inputStorage, INTEGER_TYPE, abi._integer_argument_registers, as_Register);
loadArray(inputStorage, VECTOR_TYPE, abi._vector_argument_registers, as_XMMRegister);
parse_register_array(inputStorage, INTEGER_TYPE, abi._integer_argument_registers, as_Register);
parse_register_array(inputStorage, VECTOR_TYPE, abi._vector_argument_registers, as_XMMRegister);

objArrayOop outputStorage = jdk_internal_foreign_abi_ABIDescriptor::outputStorage(abi_oop);
loadArray(outputStorage, INTEGER_TYPE, abi._integer_return_registers, as_Register);
loadArray(outputStorage, VECTOR_TYPE, abi._vector_return_registers, as_XMMRegister);
parse_register_array(outputStorage, INTEGER_TYPE, abi._integer_return_registers, as_Register);
parse_register_array(outputStorage, VECTOR_TYPE, abi._vector_return_registers, as_XMMRegister);
objArrayOop subarray = oop_cast<objArrayOop>(outputStorage->obj_at(X87_TYPE));
abi._X87_return_registers_noof = subarray->length();

objArrayOop volatileStorage = jdk_internal_foreign_abi_ABIDescriptor::volatileStorage(abi_oop);
loadArray(volatileStorage, INTEGER_TYPE, abi._integer_additional_volatile_registers, as_Register);
loadArray(volatileStorage, VECTOR_TYPE, abi._vector_additional_volatile_registers, as_XMMRegister);
parse_register_array(volatileStorage, INTEGER_TYPE, abi._integer_additional_volatile_registers, as_Register);
parse_register_array(volatileStorage, VECTOR_TYPE, abi._vector_additional_volatile_registers, as_XMMRegister);

abi._stack_alignment_bytes = jdk_internal_foreign_abi_ABIDescriptor::stackAlignment(abi_oop);
abi._shadow_space_bytes = jdk_internal_foreign_abi_ABIDescriptor::shadowSpace(abi_oop);
54 changes: 0 additions & 54 deletions src/hotspot/cpu/x86/foreign_globals_x86_64.hpp

This file was deleted.

2 changes: 1 addition & 1 deletion src/hotspot/cpu/x86/frame_x86.cpp
@@ -370,7 +370,7 @@ OptimizedEntryBlob::FrameData* OptimizedEntryBlob::frame_data_for_frame(const fr
assert(frame.is_optimized_entry_frame(), "wrong frame");
// need unextended_sp here, since normal sp is wrong for interpreter callees
return reinterpret_cast<OptimizedEntryBlob::FrameData*>(
reinterpret_cast<char*>(frame.unextended_sp()) + in_bytes(_frame_data_offset));
reinterpret_cast<address>(frame.unextended_sp()) + in_bytes(_frame_data_offset));
}

bool frame::optimized_entry_frame_is_first() const {
2 changes: 1 addition & 1 deletion src/hotspot/cpu/x86/macroAssembler_x86.cpp
@@ -930,7 +930,7 @@ void MacroAssembler::long_move(VMRegPair src, VMRegPair dst, Register tmp, int i
}
} else {
assert(dst.is_single_reg(), "not a stack pair: (%s, %s), (%s, %s)",
src.first()->name(), src.second()->name(), dst.first()->name(), dst.second()->name());
src.first()->name(), src.second()->name(), dst.first()->name(), dst.second()->name());
movq(Address(rsp, reg2offset_out(dst.first()) + out_stk_bias), src.first()->as_Register());
}
} else if (dst.is_single_phys_reg()) {
2 changes: 0 additions & 2 deletions src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp
@@ -33,15 +33,13 @@
#include "code/icBuffer.hpp"
#include "code/nativeInst.hpp"
#include "code/vtableStubs.hpp"
#include "compiler/disassembler.hpp"
#include "compiler/oopMap.hpp"
#include "gc/shared/collectedHeap.hpp"
#include "gc/shared/gcLocker.hpp"
#include "gc/shared/barrierSet.hpp"
#include "gc/shared/barrierSetAssembler.hpp"
#include "interpreter/interpreter.hpp"
#include "logging/log.hpp"
#include "logging/logStream.hpp"
#include "memory/resourceArea.hpp"
#include "memory/universe.hpp"
#include "oops/compiledICHolder.hpp"
4 changes: 2 additions & 2 deletions src/hotspot/cpu/x86/universalNativeInvoker_x86_64.cpp
@@ -133,8 +133,8 @@ void NativeInvokerGenerator::generate() {
};

Register shufffle_reg = rbx;
JavaCallConv in_conv;
NativeCallConv out_conv(_input_registers);
JavaCallingConvention in_conv;
NativeCallingConvention out_conv(_input_registers);
ArgumentShuffle arg_shuffle(_signature, _num_args, _signature, _num_args, &in_conv, &out_conv, shufffle_reg->as_VMReg());

#ifdef ASSERT
18 changes: 9 additions & 9 deletions src/hotspot/cpu/x86/universalUpcallHandler_x86_64.cpp
@@ -179,8 +179,8 @@ address ProgrammableUpcallHandler::generate_optimized_upcall_stub(jobject receiv
CodeBuffer buffer("upcall_stub_linkToNative", /* code_size = */ 2048, /* locs_size = */ 1024);

Register shuffle_reg = rbx;
JavaCallConv out_conv;
NativeCallConv in_conv(call_regs._arg_regs, call_regs._args_length);
JavaCallingConvention out_conv;
NativeCallingConvention in_conv(call_regs._arg_regs);
ArgumentShuffle arg_shuffle(in_sig_bt, total_in_args, out_sig_bt, total_out_args, &in_conv, &out_conv, shuffle_reg->as_VMReg());
int stack_slots = SharedRuntime::out_preserve_stack_slots() + arg_shuffle.out_arg_stack_slots();
int out_arg_area = align_up(stack_slots * VMRegImpl::stack_slot_size, StackAlignmentInBytes);
@@ -201,8 +201,8 @@ address ProgrammableUpcallHandler::generate_optimized_upcall_stub(jobject receiv
}

int reg_save_area_size = compute_reg_save_area_size(abi);
RegSpiller arg_spilller(call_regs._arg_regs, call_regs._args_length);
RegSpiller result_spiller(call_regs._ret_regs, call_regs._rets_length);
RegSpiller arg_spilller(call_regs._arg_regs);
RegSpiller result_spiller(call_regs._ret_regs);

int shuffle_area_offset = 0;
int res_save_area_offset = shuffle_area_offset + out_arg_area;
@@ -295,7 +295,7 @@ address ProgrammableUpcallHandler::generate_optimized_upcall_stub(jobject receiv
// return value shuffle
if (!needs_return_buffer) {
#ifdef ASSERT
if (call_regs._rets_length == 1) { // 0 or 1
if (call_regs._ret_regs.length() == 1) { // 0 or 1
VMReg j_expected_result_reg;
switch (ret_type) {
case T_BOOLEAN:
@@ -315,16 +315,16 @@ address ProgrammableUpcallHandler::generate_optimized_upcall_stub(jobject receiv
}
// No need to move for now, since CallArranger can pick a return type
// that goes in the same reg for both CCs. But, at least assert they are the same
assert(call_regs._ret_regs[0] == j_expected_result_reg,
"unexpected result register: %s != %s", call_regs._ret_regs[0]->name(), j_expected_result_reg->name());
assert(call_regs._ret_regs.at(0) == j_expected_result_reg,
"unexpected result register: %s != %s", call_regs._ret_regs.at(0)->name(), j_expected_result_reg->name());
}
#endif
} else {
assert(ret_buf_offset != -1, "no return buffer allocated");
__ lea(rscratch1, Address(rsp, ret_buf_offset));
int offset = 0;
for (int i = 0; i < call_regs._rets_length; i++) {
VMReg reg = call_regs._ret_regs[i];
for (int i = 0; i < call_regs._ret_regs.length(); i++) {
VMReg reg = call_regs._ret_regs.at(i);
if (reg->is_Register()) {
__ movptr(reg->as_Register(), Address(rscratch1, offset));
offset += 8;
2 changes: 1 addition & 1 deletion src/hotspot/share/c1/c1_GraphBuilder.cpp
@@ -4227,7 +4227,7 @@ bool GraphBuilder::try_method_handle_inline(ciMethod* callee, bool ignore_return
break;

case vmIntrinsics::_linkToNative:
print_inlining(callee, "Native call", /*success*/ false);
print_inlining(callee, "native call", /*success*/ false);
break;

default:
121 changes: 65 additions & 56 deletions src/hotspot/share/prims/foreign_globals.cpp
@@ -34,19 +34,16 @@ const CallRegs ForeignGlobals::parse_call_regs(jobject jconv) {
oop conv_oop = JNIHandles::resolve_non_null(jconv);
objArrayOop arg_regs_oop = jdk_internal_foreign_abi_CallConv::argRegs(conv_oop);
objArrayOop ret_regs_oop = jdk_internal_foreign_abi_CallConv::retRegs(conv_oop);
CallRegs result;
result._args_length = arg_regs_oop->length();
result._arg_regs = NEW_RESOURCE_ARRAY(VMReg, result._args_length);
int num_args = arg_regs_oop->length();
int num_rets = ret_regs_oop->length();
CallRegs result(num_args, num_rets);

result._rets_length = ret_regs_oop->length();
result._ret_regs = NEW_RESOURCE_ARRAY(VMReg, result._rets_length);

for (int i = 0; i < result._args_length; i++) {
result._arg_regs[i] = parse_vmstorage(arg_regs_oop->obj_at(i));
for (int i = 0; i < num_args; i++) {
result._arg_regs.push(parse_vmstorage(arg_regs_oop->obj_at(i)));
}

for (int i = 0; i < result._rets_length; i++) {
result._ret_regs[i] = parse_vmstorage(ret_regs_oop->obj_at(i));
for (int i = 0; i < num_rets; i++) {
result._ret_regs.push(parse_vmstorage(ret_regs_oop->obj_at(i)));
}

return result;
@@ -58,19 +55,19 @@ VMReg ForeignGlobals::parse_vmstorage(oop storage) {
return vmstorage_to_vmreg(type, index);
}

int RegSpiller::compute_spill_area(const VMReg* regs, int num_regs) {
int RegSpiller::compute_spill_area(const GrowableArray<VMReg>& regs) {
int result_size = 0;
for (int i = 0; i < num_regs; i++) {
result_size += pd_reg_size(regs[i]);
for (int i = 0; i < regs.length(); i++) {
result_size += pd_reg_size(regs.at(i));
}
return result_size;
}

void RegSpiller::generate(MacroAssembler* masm, int rsp_offset, bool spill) const {
assert(rsp_offset != -1, "rsp_offset should be set");
int offset = rsp_offset;
for (int i = 0; i < _num_regs; i++) {
VMReg reg = _regs[i];
for (int i = 0; i < _regs.length(); i++) {
VMReg reg = _regs.at(i);
if (spill) {
pd_store_reg(masm, offset, reg);
} else {
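`RegSpiller::compute_spill_area` above now sums per-register sizes over a `GrowableArray` instead of a raw pointer and length. A standalone sketch of that computation, with a stub `pd_reg_size` (the real one is platform-defined, and the sizes here are assumptions):

```cpp
#include <cassert>
#include <vector>

// Stub for the platform-defined register size: assume ids below 16 are
// 8-byte general registers and the rest are 16-byte vector registers.
inline int pd_reg_size(int reg) { return reg < 16 ? 8 : 16; }

// Total spill area is the sum of each register's size, as in
// RegSpiller::compute_spill_area.
int compute_spill_area(const std::vector<int>& regs) {
  int result_size = 0;
  for (int r : regs) {
    result_size += pd_reg_size(r);
  }
  return result_size;
}
```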
@@ -102,7 +99,7 @@ void ArgumentShuffle::print_on(outputStream* os) const {
os->print_cr("}");
}

int NativeCallConv::calling_convention(BasicType* sig_bt, VMRegPair* out_regs, int num_args) const {
int NativeCallingConvention::calling_convention(BasicType* sig_bt, VMRegPair* out_regs, int num_args) const {
int src_pos = 0;
int stk_slots = 0;
for (int i = 0; i < num_args; i++) {
@@ -113,8 +110,8 @@ int NativeCallConv::calling_convention(BasicType* sig_bt, VMRegPair* out_regs, i
case T_SHORT:
case T_INT:
case T_FLOAT: {
assert(src_pos < _input_regs_length, "oob");
VMReg reg = _input_regs[src_pos++];
assert(src_pos < _input_regs.length(), "oob");
VMReg reg = _input_regs.at(src_pos++);
out_regs[i].set1(reg);
if (reg->is_stack())
stk_slots += 2;
@@ -123,8 +120,8 @@ int NativeCallConv::calling_convention(BasicType* sig_bt, VMRegPair* out_regs, i
case T_LONG:
case T_DOUBLE: {
assert((i + 1) < num_args && sig_bt[i + 1] == T_VOID, "expecting half");
assert(src_pos < _input_regs_length, "oob");
VMReg reg = _input_regs[src_pos++];
assert(src_pos < _input_regs.length(), "oob");
VMReg reg = _input_regs.at(src_pos++);
out_regs[i].set2(reg);
if (reg->is_stack())
stk_slots += 2;
@@ -142,32 +139,24 @@ int NativeCallConv::calling_convention(BasicType* sig_bt, VMRegPair* out_regs, i
return stk_slots;
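The `NativeCallingConvention::calling_convention` hunks above assign each argument the next pre-computed input register and count two stack slots per stack-passed argument. A simplified sketch of that accounting (a hypothetical `FakeReg` type; the real code also handles the `T_VOID` halves of longs and doubles):

```cpp
#include <cassert>
#include <vector>

// Hypothetical register descriptor.
struct FakeReg { bool on_stack; };

// Assign each of num_args arguments the next input register; return
// the number of stack slots consumed (two per stack-passed argument).
int assign_native_args(const std::vector<FakeReg>& input_regs, int num_args,
                       std::vector<FakeReg>& out_regs) {
  int src_pos = 0;
  int stk_slots = 0;
  for (int i = 0; i < num_args; i++) {
    FakeReg reg = input_regs.at(src_pos++);  // bounds-checked, like the oob assert
    out_regs.push_back(reg);
    if (reg.on_stack) {
      stk_slots += 2;
    }
  }
  return stk_slots;
}
```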
}

// based on ComputeMoveOrder from x86_64 shared runtime code.
// with some changes.
class ForeignCMO: public StackObj {
class ComputeMoveOrder: public StackObj {
class MoveOperation: public ResourceObj {
friend class ForeignCMO;
friend class ComputeMoveOrder;
private:
VMRegPair _src;
VMRegPair _dst;
bool _processed;
MoveOperation* _next;
MoveOperation* _prev;
BasicType _bt;
VMRegPair _src;
VMRegPair _dst;
bool _processed;
MoveOperation* _next;
MoveOperation* _prev;
BasicType _bt;

static int get_id(VMRegPair r) {
return r.first()->value();
}

public:
MoveOperation(VMRegPair src, VMRegPair dst, BasicType bt):
_src(src)
, _dst(dst)
, _processed(false)
, _next(NULL)
, _prev(NULL)
, _bt(bt) {
}
MoveOperation(VMRegPair src, VMRegPair dst, BasicType bt)
: _src(src), _dst(dst), _processed(false), _next(NULL), _prev(NULL), _bt(bt) {}

int src_id() const { return get_id(_src); }
int dst_id() const { return get_id(_dst); }
@@ -210,25 +199,41 @@ class ForeignCMO: public StackObj {
};

private:
int _total_in_args;
const VMRegPair* _in_regs;
int _total_out_args;
const VMRegPair* _out_regs;
const BasicType* _in_sig_bt;
VMRegPair _tmp_vmreg;
GrowableArray<MoveOperation*> _edges;
GrowableArray<Move> _moves;

public:
ForeignCMO(int total_in_args, const VMRegPair* in_regs, int total_out_args, VMRegPair* out_regs,
const BasicType* in_sig_bt, VMRegPair tmp_vmreg) : _edges(total_in_args), _moves(total_in_args) {
assert(total_out_args >= total_in_args, "can only add prefix args");
ComputeMoveOrder(int total_in_args, const VMRegPair* in_regs, int total_out_args, VMRegPair* out_regs,
const BasicType* in_sig_bt, VMRegPair tmp_vmreg) :
_total_in_args(total_in_args),
_in_regs(in_regs),
_total_out_args(total_out_args),
_out_regs(out_regs),
_in_sig_bt(in_sig_bt),
_tmp_vmreg(tmp_vmreg),
_edges(total_in_args),
_moves(total_in_args) {
}

void compute() {
assert(_total_out_args >= _total_in_args, "can only add prefix args");
// Note that total_out_args args can be greater than total_in_args in the case of upcalls.
// There will be a leading MH receiver arg in the out args in that case.
//
// Leading args in the out args will be ignored below because we iterate from the end of
// the register arrays until !(in_idx >= 0), and total_in_args is smaller.
//
// Stub code adds a move for the receiver to j_rarg0 (and potential other prefix args) manually.
for (int in_idx = total_in_args - 1, out_idx = total_out_args - 1; in_idx >= 0; in_idx--, out_idx--) {
BasicType bt = in_sig_bt[in_idx];
for (int in_idx = _total_in_args - 1, out_idx = _total_out_args - 1; in_idx >= 0; in_idx--, out_idx--) {
BasicType bt = _in_sig_bt[in_idx];
assert(bt != T_ARRAY, "array not expected");
VMRegPair in_reg = in_regs[in_idx];
VMRegPair out_reg = out_regs[out_idx];
VMRegPair in_reg = _in_regs[in_idx];
VMRegPair out_reg = _out_regs[out_idx];

if (out_reg.first()->is_stack()) {
// Move operations where the dest is the stack can all be
@@ -249,7 +254,7 @@ class ForeignCMO: public StackObj {
}
// Break any cycles in the register moves and emit the in the
// proper order.
compute_store_order(tmp_vmreg);
compute_store_order(_tmp_vmreg);
}

// Walk the edges breaking cycles between moves. The result list
@@ -298,8 +303,13 @@ class ForeignCMO: public StackObj {
}
}

GrowableArray<Move> moves() {
return _moves;
public:
static GrowableArray<Move> compute_move_order(int total_in_args, const VMRegPair* in_regs,
int total_out_args, VMRegPair* out_regs,
const BasicType* in_sig_bt, VMRegPair tmp_vmreg) {
ComputeMoveOrder cmo(total_in_args, in_regs, total_out_args, out_regs, in_sig_bt, tmp_vmreg);
cmo.compute();
return cmo._moves;
}
};

@@ -308,8 +318,8 @@ ArgumentShuffle::ArgumentShuffle(
int num_in_args,
BasicType* out_sig_bt,
int num_out_args,
const CallConvClosure* input_conv,
const CallConvClosure* output_conv,
const CallingConventionClosure* input_conv,
const CallingConventionClosure* output_conv,
VMReg shuffle_temp) {

VMRegPair* in_regs = NEW_RESOURCE_ARRAY(VMRegPair, num_in_args);
@@ -322,13 +332,12 @@ ArgumentShuffle::ArgumentShuffle(
tmp_vmreg.set2(shuffle_temp);

// Compute a valid move order, using tmp_vmreg to break any cycles.
-  // Note that ForeignCMO ignores the upper half of our VMRegPairs.
+  // Note that ComputeMoveOrder ignores the upper half of our VMRegPairs.
// We are not moving Java values here, only register-sized values,
// so we shouldn't have to worry about the upper half anyway.
// This should work fine on 32-bit as well, since we would only be
// moving 32-bit sized values (i.e. low-level MH shouldn't take any double/long).
-  ForeignCMO order(num_in_args, in_regs,
-                   num_out_args, out_regs,
-                   in_sig_bt, tmp_vmreg);
-  _moves = order.moves();
+  _moves = ComputeMoveOrder::compute_move_order(num_in_args, in_regs,
+                                                num_out_args, out_regs,
+                                                in_sig_bt, tmp_vmreg);
}
40 changes: 15 additions & 25 deletions src/hotspot/share/prims/foreign_globals.hpp
@@ -32,23 +32,23 @@

#include CPU_HEADER(foreign_globals)

-class CallConvClosure {
+class CallingConventionClosure {
public:
virtual int calling_convention(BasicType* sig_bt, VMRegPair* regs, int num_args) const = 0;
};

struct CallRegs {
-  VMReg* _arg_regs;
-  int _args_length;
-
-  VMReg* _ret_regs;
-  int _rets_length;
+  GrowableArray<VMReg> _arg_regs;
+  GrowableArray<VMReg> _ret_regs;
+
+  CallRegs(int num_args, int num_rets)
+    : _arg_regs(num_args), _ret_regs(num_rets) {}
};
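The reworked CallRegs swaps raw pointer/length pairs for GrowableArrays pre-sized from counts known at construction time. A hypothetical standalone analogue using std::vector (names invented; `reserve` here mirrors passing an initial capacity to GrowableArray's constructor, under the assumption that the capacity constructor leaves the array empty):

```cpp
#include <cassert>
#include <vector>

// Hypothetical analogue of the new CallRegs layout: the element counts are
// known up front, so each array reserves its capacity at construction and
// is then filled incrementally, replacing the old raw-pointer + length
// field pairs.
struct CallRegsSketch {
  std::vector<int> arg_regs;  // stand-in for GrowableArray<VMReg>
  std::vector<int> ret_regs;
  CallRegsSketch(int num_args, int num_rets) {
    arg_regs.reserve(num_args);
    ret_regs.reserve(num_rets);
  }
};
```

Bundling the length with the storage removes the possibility of the pointer and its length field drifting out of sync.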

class ForeignGlobals {
private:
template<typename T, typename Func>
-  static void loadArray(objArrayOop jarray, int type_index, GrowableArray<T>& array, Func converter);
+  static void parse_register_array(objArrayOop jarray, int type_index, GrowableArray<T>& array, Func converter);

public:
static const ABIDescriptor parse_abi_descriptor(jobject jabi);
@@ -59,45 +59,35 @@ class ForeignGlobals {



-class JavaCallConv : public CallConvClosure {
+class JavaCallingConvention : public CallingConventionClosure {
public:
int calling_convention(BasicType* sig_bt, VMRegPair* regs, int num_args) const override {
return SharedRuntime::java_calling_convention(sig_bt, regs, num_args);
}
};

-class NativeCallConv : public CallConvClosure {
-  const VMReg* _input_regs;
-  int _input_regs_length;
+class NativeCallingConvention : public CallingConventionClosure {
+  GrowableArray<VMReg> _input_regs;
public:
-  NativeCallConv(const VMReg* input_regs, int input_regs_length) :
-    _input_regs(input_regs),
-    _input_regs_length(input_regs_length) {
-  }
-  NativeCallConv(const GrowableArray<VMReg>& input_regs)
-    : NativeCallConv(input_regs.data(), input_regs.length()) {}
+  NativeCallingConvention(const GrowableArray<VMReg>& input_regs)
+    : _input_regs(input_regs) {}

int calling_convention(BasicType* sig_bt, VMRegPair* out_regs, int num_args) const override;
};

class RegSpiller {
-  const VMReg* _regs;
-  int _num_regs;
+  GrowableArray<VMReg> _regs;
int _spill_size_bytes;
public:
-  RegSpiller(const VMReg* regs, int num_regs) :
-    _regs(regs), _num_regs(num_regs),
-    _spill_size_bytes(compute_spill_area(regs, num_regs)) {
-  }
-  RegSpiller(const GrowableArray<VMReg>& regs) : RegSpiller(regs.data(), regs.length()) {
+  RegSpiller(const GrowableArray<VMReg>& regs) : _regs(regs), _spill_size_bytes(compute_spill_area(regs)) {
}

int spill_size_bytes() const { return _spill_size_bytes; }
void generate_spill(MacroAssembler* masm, int rsp_offset) const { return generate(masm, rsp_offset, true); }
void generate_fill(MacroAssembler* masm, int rsp_offset) const { return generate(masm, rsp_offset, false); }

private:
-  static int compute_spill_area(const VMReg* regs, int num_regs);
+  static int compute_spill_area(const GrowableArray<VMReg>& regs);
void generate(MacroAssembler* masm, int rsp_offset, bool is_spill) const;

static int pd_reg_size(VMReg reg);
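RegSpiller's constructor now derives the spill area directly from the GrowableArray it stores. A hypothetical sketch of such a computation (the register kinds and sizes are illustrative stand-ins, not HotSpot's `pd_reg_size`):

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of a RegSpiller-style spill-area computation: walk
// the registers once, summing a per-register slot size, so that the spill
// and fill sequences can later use identical offsets into the same area.
enum class RegKind { Int64, Vector128 };

static int reg_size_bytes(RegKind k) {
  return k == RegKind::Int64 ? 8 : 16;  // illustrative sizes only
}

static int compute_spill_area(const std::vector<RegKind>& regs) {
  int size = 0;
  for (RegKind k : regs) {
    size += reg_size_bytes(k);
  }
  return size;
}
```

Computing the total once in the constructor means `spill_size_bytes()` is a cheap accessor and both `generate_spill` and `generate_fill` agree on the layout by construction.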
@@ -119,7 +109,7 @@ class ArgumentShuffle {
ArgumentShuffle(
BasicType* in_sig_bt, int num_in_args,
BasicType* out_sig_bt, int num_out_args,
-    const CallConvClosure* input_conv, const CallConvClosure* output_conv,
+    const CallingConventionClosure* input_conv, const CallingConventionClosure* output_conv,
VMReg shuffle_temp);

int out_arg_stack_slots() const { return _out_arg_stack_slots; }
2 changes: 1 addition & 1 deletion src/hotspot/share/prims/foreign_globals.inline.hpp
@@ -32,7 +32,7 @@
#include "oops/oopCast.inline.hpp"

template<typename T, typename Func>
-void ForeignGlobals::loadArray(objArrayOop jarray, int type_index, GrowableArray<T>& array, Func converter) {
+void ForeignGlobals::parse_register_array(objArrayOop jarray, int type_index, GrowableArray<T>& array, Func converter) {
objArrayOop subarray = oop_cast<objArrayOop>(jarray->obj_at(type_index));
int subarray_length = subarray->length();
for (int i = 0; i < subarray_length; i++) {
4 changes: 2 additions & 2 deletions src/hotspot/share/prims/nativeEntryPoint.cpp
@@ -57,8 +57,8 @@ JNI_ENTRY(jlong, NEP_makeInvoker(JNIEnv* env, jclass _unused, jobject method_typ
if (bt == BasicType::T_DOUBLE || bt == BasicType::T_LONG) {
basic_type[bt_idx++] = T_VOID;
// we only need these in the basic type
-      // NativeCallConv ignores them, but they are needed
-      // for JavaCallConv
+      // NativeCallingConvention ignores them, but they are needed
+      // for JavaCallingConvention
}
}

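The hunk above appends a `T_VOID` placeholder after each long/double, because the Java calling convention gives those types two slots. A minimal hypothetical sketch of that padding (simplified basic-type enum and invented function name, not HotSpot's):

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of the T_VOID padding performed above: the Java
// calling convention treats T_LONG and T_DOUBLE as two-slot values, so the
// signature array carries a T_VOID placeholder after each; the native
// calling convention simply ignores the placeholders.
enum BT { T_INT_BT, T_LONG_BT, T_DOUBLE_BT, T_VOID_BT };

static std::vector<BT> pad_signature(const std::vector<BT>& sig) {
  std::vector<BT> out;
  for (BT bt : sig) {
    out.push_back(bt);
    if (bt == T_LONG_BT || bt == T_DOUBLE_BT) {
      out.push_back(T_VOID_BT);  // second slot of a two-slot value
    }
  }
  return out;
}
```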
8 changes: 0 additions & 8 deletions src/hotspot/share/utilities/growableArray.hpp
@@ -141,14 +141,6 @@ class GrowableArrayView : public GrowableArrayBase {
return !(*this == rhs);
}

-  E* data() {
-    return _data;
-  }
-
-  const E* data() const {
-    return _data;
-  }

E& at(int i) {
assert(0 <= i && i < _len, "illegal index");
return _data[i];
@@ -310,8 +310,9 @@ static MethodHandle mergeArguments(MethodHandle mh, int sourceIndex, int destInd
}
MethodType newType = oldType.dropParameterTypes(destIndex, destIndex + 1);
int[] reorder = new int[oldType.parameterCount()];
-        if (destIndex < sourceIndex)
+        if (destIndex < sourceIndex) {
            sourceIndex--;
+        }
for (int i = 0, index = 0; i < reorder.length; i++) {
if (i != destIndex) {
reorder[i] = index++;
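The braces fix above sits inside mergeArguments' reorder computation: parameter destIndex is dropped from the type, every other position maps to its shifted index, and destIndex re-reads the merged source slot (which itself shifts left by one when the drop happens before it). A hypothetical C++ restatement of that index math:

```cpp
#include <cassert>
#include <vector>

// Hypothetical restatement of the reorder array built in mergeArguments:
// dropping parameter `dest` shifts all later parameters left by one, and
// position `dest` in the old signature re-reads whatever slot `source`
// ends up at in the new signature.
static std::vector<int> merge_reorder(int param_count, int source, int dest) {
  if (dest < source) {
    source--;  // the dropped parameter sits before source, shifting it left
  }
  std::vector<int> reorder(param_count);
  for (int i = 0, index = 0; i < param_count; i++) {
    reorder[i] = (i != dest) ? index++ : source;
  }
  return reorder;
}
```

For example, merging old parameter 1 into old parameter 3 of a four-argument handle yields the reorder [0, 2, 1, 2]: the new three-parameter handle feeds its slot 2 to both old positions 1 and 3.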
@@ -166,7 +166,7 @@ public static ABIDescriptor abiFor(VMStorage[] inputIntRegs,
},
stackAlignment,
shadowSpace,
-        targetAddrStorage, retBufAddrStorage);
+        targetAddrStorage, retBufAddrStorage);
}

}
@@ -153,7 +153,7 @@ public static ABIDescriptor abiFor(VMStorage[] inputIntRegs, VMStorage[] inputVe
},
stackAlignment,
shadowSpace,
-        targetAddrStorage, retBufAddrStorage);
+        targetAddrStorage, retBufAddrStorage);
}

}