LLVM Discussion Forums

JIT execution with thread_local global variable

Hi guys,

I’m new to LLVM.
After spending a few hours on the internet, I found pretty much nothing about how to allocate a global variable that is thread_local.

What I want to do is fairly straightforward: I would like a thread_local global variable that a function can access and simply return.

However, I’ve had no luck implementing the idea.

Does anyone know where to find a good tutorial on something similar?

Thanks

Can you express what you want to do in C or C++? If so, you can use clang to check the LLVM IR that corresponds to it with clang -S -emit-llvm.

Hi

Thanks for the reply.

Here is my code:

InitializeNativeTarget();
InitializeNativeTargetAsmPrinter();

Constant* init_value = ConstantInt::get(Type::getInt32Ty(context), APInt(32,12));

GlobalVariable* bsp = new GlobalVariable(*module, Type::getInt32Ty(context), false, GlobalValue::ExternalLinkage, init_value, "bsp", 0/*, GlobalValue::GeneralDynamicTLSModel*/);
llvm::FunctionType* funcType = llvm::FunctionType::get(Type::getInt32Ty(context), {}, false);
llvm::Function* mainFunc = llvm::Function::Create(funcType, llvm::Function::ExternalLinkage, "main", module.get());

llvm::BasicBlock* bb = llvm::BasicBlock::Create(context, "entrypoint", mainFunc);
IRBuilder<> builder(bb);
builder.CreateRet(builder.CreateLoad(bsp));

auto TheExecutionEngine = std::unique_ptr<ExecutionEngine>(llvm::EngineBuilder(std::move(module)).create());
const auto shader = (int(*)())TheExecutionEngine->getFunctionAddress("main");

printf("%d\n", shader());

Below is the IR generated by LLVM 10.0.0:

; ModuleID = 'my cool jit'
source_filename = "my cool jit"

@bsp = global i32 12

define i32 @main() {
entrypoint:
  %0 = load i32, i32* @bsp, align 4
  ret i32 %0
}
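For reference, with the ThreadLocalMode argument uncommented, the only difference I would expect in the IR is the thread_local marker on the global (the general-dynamic model is the default, so it prints without qualification):

```llvm
@bsp = thread_local global i32 12
```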

If the ‘GeneralDynamicTLSModel’ argument is uncommented, I get a crash on the last line (the call to shader()). I have searched for the whole day without finding any resources on how to allocate a thread-local global variable.

Please give me some suggestions if there is something that I missed.

Nothing strikes me as incorrect here. What is the crash? At this point I’d build in debug mode, attach a debugger, and look at the stack trace to understand where it is crashing.

I’ve been doing that for the whole day, but as a newcomer I find the LLVM codebase pretty overwhelming.

I happened to notice this code in ExecutionEngine.cpp:

void ExecutionEngine::emitGlobalVariable(const GlobalVariable *GV) {
  void *GA = getPointerToGlobalIfAvailable(GV);

  if (!GA) {
    // If it's not already specified, allocate memory for the global.
    GA = getMemoryForGV(GV);

    // If we failed to allocate memory for this global, return.
    if (!GA) return;

    addGlobalMapping(GV, GA);
  }

  // Don't initialize if it's thread local, let the client do it.
  if (!GV->isThreadLocal())
    InitializeMemory(GV->getInitializer(), GA);

  Type *ElTy = GV->getValueType();
  size_t GVSize = (size_t)getDataLayout().getTypeAllocSize(ElTy);
  NumInitBytes += (unsigned)GVSize;
  ++NumGlobals;
}

It is pretty clear that a thread_local global variable doesn’t get initialized here. This makes sense to me: each thread owns an independent copy of the variable, so initialization has to happen per thread rather than once when the global is emitted.

Do you have any example demonstrating how to initialize a thread-local global variable the way the comment suggests, i.e. ‘let the client do it’?

There is no clear signal from the call stack.

It crashes inside the function call ( shader() ), without anything deeper inside.

Is there any way to make LLVM emit more diagnostics?

OK, I thought you were seeing a crash during compilation, not during execution.
I assume your LLVM is built in debug mode (or at least with assertions enabled)? I would expect an error message rather than a crash.

I know that handling thread-local storage in the JIT engine is a bit trickier, but I’m not sure about the details; someone more knowledgeable will have to chime in here.

I have some progress on this.

It turns out the same code works fine on Ubuntu, but it crashes on both Windows and macOS. The inconsistent behaviour across systems makes me wonder how much I can trust this feature’s implementation in LLVM; I guess I will have to find some workaround for this problem.

I still won’t go so far as to say LLVM is buggy, but the limited documentation and resources on this topic are blocking my progress, which is why I will have to find an alternative workaround.

Oh, I didn’t know you were on Windows! JIT support there is fairly recent, I believe, and I’m not sure TLS is expected to work (it’d be nice to get a better error message than a crash, though).

Thread-local support in the JIT is pretty limited in general, and TLS on Windows hasn’t had any effort put into it, to my knowledge.

Have you tried checking ExecutionEngine::hasError and ExecutionEngine::getErrorMessage to see if they return anything useful?

You might also consider switching from ExecutionEngine to LLJIT (e.g. https://github.com/llvm/llvm-project/blob/master/llvm/examples/HowToUseLLJIT/HowToUseLLJIT.cpp): it does a better job of reporting errors.

Thanks, guys.

There is no error message, only a crash stack consisting entirely of assembly.
I guess I just have to figure out a workaround then.

Even if it doesn’t work, at least I now know I wasn’t doing anything wrong, and I can work around the problem.