[CIR][ABI][AArch64][Lowering] support for calling struct types in range (64, 128) #1141
base: main
Conversation
Support expressions at the top level such as `const unsigned int n = 1234; const int &r = (const int&)n;` Reviewers: bcardosolopes Pull Request: llvm#857
This is to match clang CodeGen
As title. Also introduced a `buildAArch64NeonCall` skeleton, which is partially the counterpart of OG's `EmitNeonCall`, and this could be used for many other NEON intrinsics. --------- Co-authored-by: Guojin He <[email protected]>
These were uninitialized, which led to intermittent test failures from the use of uninitialized variables. Initialize them to `nullptr` as is done with other member variables that are pointers to fix this. I did a quick spot-check and didn't find other uninitialized variables in the main CGF class itself. Lots of subclasses have uninitialized member variables, but those are presumably expected to be initialized at all points of construction, so we can leave them alone until they cause any issues. `ninja check-clang-cir` now passes with ASan+UBSan and MSan. Fixes llvm#829
See the test for example.
This PR adds aarch64 big-endian support. The support for aarch64_be itself amounts to only two extra cases in the switch statement, plus changes in `CIRDataLayout` needed to prove that we really support big endian. Hence the idea for the test: I think the best proof is something connected with bit-fields, so we compare the results of the original codegen and ours.
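For illustration, the kind of layout such a test can probe (a hypothetical snippet, not the actual test from this PR): on a big-endian target, bit-field members are packed starting from the most significant bits of the allocation unit, so the bytes in memory differ from the little-endian layout.
```
/* Hypothetical bit-field probe: the placement of 'a' and 'b' inside
 * the 32-bit allocation unit differs between aarch64 and aarch64_be,
 * so comparing against the original codegen exposes layout mistakes. */
struct Probe {
  unsigned a : 3;
  unsigned b : 29;
};

unsigned read_a(struct Probe *p) { return p->a; }
```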
This PR splits the old `cir-simplify` pass into two new passes, namely `cir-canonicalize` and `cir-simplify` (the new `cir-simplify`). The `cir-canonicalize` pass runs transformations that do not affect CIR-to-source fidelity much, such as operation folding and redundant operation elimination. On the other hand, the new `cir-simplify` pass runs transformations that may significantly change the code and break high-level code analysis passes, such as more aggressive code optimizations. This PR also updates the CIR-to-CIR pipeline to fit these two new passes. The `cir-canonicalize` pass is moved to the very front of the pipeline, while the new `cir-simplify` pass is moved to the back of the pipeline (but still before lowering prepare, of course). Additionally, the new `cir-simplify` now only runs when the user specifies a non-zero optimization level on the frontend. Also fixed some typos and resolved some `clang-tidy` complaints along the way. Resolves llvm#827.
Currently the C style cast is not implemented/supported for unions. This PR adds support for union casts as done in `CGExprAgg.cpp`. I have also added an extra test in `union-init.c`.
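As a sketch, this is the kind of code that now works (hypothetical snippet; the actual coverage is the extra test in `union-init.c`):
```
/* C-style cast to a union type (a GNU extension): the operand
 * initializes the union member whose type matches. */
typedef union {
  int i;
  float f;
} U;

U from_int(int x) {
  return (U)x; /* initializes the 'int' member */
}
```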
Mistakenly closed llvm#850 llvm#850 (review) This PR fixes array initialization for expression arguments. Consider the following code snippet `test.c`:
```
typedef struct {
  int a;
  int b[2];
} A;

int bar() { return 42; }

void foo() { A a = {bar(), {}}; }
```
When run with `bin/clang test.c -Xclang -fclangir -Xclang -emit-cir -S -o -`, it produces the following error:
```
~/clangir/clang/lib/CIR/CodeGen/CIRGenExprAgg.cpp:483: void {anonymous}::AggExprEmitter::buildArrayInit(cir::Address, mlir::cir::ArrayType, clang::QualType, clang::Expr*, llvm::ArrayRef<clang::Expr*>, clang::Expr*): Assertion `NumInitElements != 0' failed.
```
The error can be traced back to `CIRGenExprAgg.cpp`, and the fix is simple. It is possible to have an empty array initialization as an expression argument!
As title: if the element type of a vector type is sized, then the vector type should be deemed sized. This enables us to generate code for NEON without triggering an assertion.
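For instance, a NEON vector type along these lines (an illustrative definition in the style of `arm_neon.h`) has a sized element type and should therefore be treated as sized itself:
```
/* 'float' is sized, so the 4 x float NEON vector must be sized too. */
typedef __attribute__((neon_vector_type(4))) float float32x4_t;

float32x4_t pass_through(float32x4_t v) { return v; }
```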
…eon_vrndaq_v (llvm#871) As title. This also added NeonType support for Float32. Co-authored-by: Guojin He <[email protected]>
…::saved_type::save
It will hit another assert when calling initFullExprCleanup.
This PR fixes the case when a temporary var is used and an `alloca` operation is inserted at the block start before the `label` operation. Implementation: when we search for the `alloca` insertion place in a block, we take label operations into account as well. Fix llvm#870 --------- Co-authored-by: Bruno Cardoso Lopes <[email protected]>
`__attribute__((annotate()))` was only accepting integer literals, preventing, for example, some meta-programming usage. This should be extended to some other kinds of types. --------- Co-authored-by: Bruno Cardoso Lopes <[email protected]>
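A sketch of what this unlocks (hypothetical annotation arguments; Clang's `annotate` takes a string literal followed by additional constant expressions):
```
/* Beyond integer literals: string and floating-point constants
 * attached to the annotation for meta-programming consumers. */
__attribute__((annotate("bounds", "percent", 0.0, 100.0)))
int sensor_value;
```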
Just as the title says, but only covers non-exception path, that's coming next.
Nothing unblocked yet, just hit next assert in the same path.
… exceptions Code path still hits an assert sooner, incremental NFC step.
…lvm#878) Close llvm#876 We've already considered the case where there are random stmts after a switch case:
```
for (auto *c : compoundStmt->body()) {
  if (auto *switchCase = dyn_cast<SwitchCase>(c)) {
    res = buildSwitchCase(*switchCase, condType, caseAttrs);
  } else if (lastCaseBlock) {
    // This means it's a random stmt following up a case, just
    // emit it as part of previous known case.
    mlir::OpBuilder::InsertionGuard guardCase(builder);
    builder.setInsertionPointToEnd(lastCaseBlock);
    res = buildStmt(c, /*useCurrentScope=*/!isa<CompoundStmt>(c));
  } else {
    llvm_unreachable("statement doesn't belong to any case region, NYI");
  }
  lastCaseBlock = builder.getBlock();
  if (res.failed())
    break;
}
```
However, maybe this is an oversight: in the `if (lastCaseBlock)` branch, the insertion point is restored automatically when the RAII object `guardCase` is destroyed, so we can assign the correct value to `lastCaseBlock` afterwards. That is why we see the weird code pattern described in the issue. BTW, I found the code in CIRGenStmt.cpp is far less similar to the corresponding code in other codegen places. Is this intentional? And what are the motivation and guidelines here?
…lvm#882) As title. Notice that for those intrinsics, just like OG, we do not lower to LLVM intrinsics; instead, we do a vector insert. The test case is partially from OG [aarch64-neon-vget.c](https://github.com/llvm/clangir/blob/85bc6407f559221afebe08a60ed2b50bf1edf7fa/clang/test/CodeGen/aarch64-neon-vget.c). But I did not do all signed and unsigned int tests, because unsigned and signed of the same width essentially use the same intrinsic ID, and thus exactly the same code path as far as this PR is concerned. --------- Co-authored-by: Guojin He <[email protected]>
Reviewers: bcardosolopes Reviewed By: bcardosolopes Pull Request: llvm#881
…#877) We need a target-independent way to distinguish OpenCL kernels in ClangIR. This PR adds a unit attribute `OpenCLKernelAttr` similar to the one in Clang AST. This attribute is attached to the extra attribute dictionary of `cir.func` operations only. (Not for `cir.call`.)
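For reference, the sort of function this marks (a hypothetical OpenCL C snippet; in ClangIR the new unit attribute would sit in the extra attribute dictionary of the `cir.func` generated for it):
```
/* OpenCL C: the kernel qualifier makes this a kernel entry point. */
__kernel void vadd(__global const int *a, __global const int *b,
                   __global int *out) {
  size_t i = get_global_id(0);
  out[i] = a[i] + b[i];
}
```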
…'case' (llvm#879) Motivation example:
```
extern "C" void action1();
extern "C" void action2();

extern "C" void case_follow_label(int v) {
  switch (v) {
    case 1:
    label:
    case 2:
      action1();
      break;
    default:
      action2();
      goto label;
  }
}
```
When we compile it, we hit:
```
case Stmt::CaseStmtClass:
case Stmt::DefaultStmtClass:
  assert(0 && "Should not get here, currently handled directly from SwitchStmt");
  break;
```
in `buildStmt`. The cause is clear: we call `buildStmt` when we build the label stmt. To solve this, I think we should be able to build the case stmt in `buildStmt`. But the new problem is that we need to pass along information like `caseAttr` and `condType`, so I added such information to `CIRGenFunction` as data members.
This is going to be raised in follow-up work, which is hard to do in one go because `createBaseClassAddr` goes off the OG skeleton, and ideally we want `ApplyNonVirtualAndVirtualOffset` to work naturally. This also doesn't handle null checks; that's coming next.
Now that we fixed the dep on VBase, clean up the rest of the function.
…e BaseClassAddrOp
It was always the intention for `cir.cmp` operations to return a bool result. Due to missing constraints, a bug slipped into codegen which created `cir.cmp` operations whose result type matched the original AST expression type. In C, as opposed to C++, boolean expression types are `int`. This resulted in extra operations being codegened around boolean expressions and their usage. This commit both enforces the bool result in the `cir.cmp` op definition and fixes the mentioned bug.
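A small illustration of the C-versus-C++ difference behind the bug (hypothetical snippet): in C the comparison below has type `int`, yet codegen should still emit a bool-typed `cir.cmp` followed by an explicit cast, not an int-typed compare.
```
/* In C, (x < y) has type int, not _Bool; with the fix, codegen
 * produces a bool-result cir.cmp plus a widening cast to int. */
int less_than(int x, int y) {
  return x < y;
}
```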
…vm#1135) support `llvm.intr.memset.inline` in llvm-project repo before we add support for `__builtin_memset_inline` in clangir cc @bcardosolopes (cherry picked from commit 30753af)
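For context, a usage sketch of the builtin this prepares for (`__builtin_memset_inline` requires a compile-time-constant size and never degrades into a library call):
```
/* Zero a fixed-size block; the size argument must be a constant. */
void zero_block(char *buf) {
  __builtin_memset_inline(buf, 0, 16);
}
```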
This is the first patch to support TBAA, following the discussion at llvm#1076 (comment)
- add skeleton for CIRGen, utilizing `decorateOperationWithTBAA`
- add empty implementation in `CIRGenTBAA`
- introduce `CIR_TBAAAttr` with empty body
- attach `CIR_TBAAAttr` to `LoadOp` and `StoreOp`
- no handling of vtable pointer
- no LLVM lowering
The title describes the purpose of the PR. It adds initial support for structures with padding to the call convention lowering for AArch64. I have also added _initial support_ for the missing [FinishLayout](https://github.com/llvm/clangir/blob/5c5d58402bebdb1e851fb055f746662d4e7eb586/clang/lib/AST/RecordLayoutBuilder.cpp#L786) feature for records; the logic is taken from the original codegen. Finally, I added a test for verification.
This is basically the same as #1059 except for argument passing instead of return values, right?
This diff looks fine to me (though I'll give others some time to chime in). For future work I'm wondering if we can unify some of the argument and return paths, but I haven't looked into it very much (and it's not something you need to worry about for this diff).
Yes, you are correct. Actually, in the original CodeGen they are both done in `CreateCoercedLoad`, and ideally they should be handled together.
This PR adds support for lowering AArch64 calls with struct arguments whose size is greater than 64 bits and less than 128 bits.
The idea is from the original CodeGen, where we perform a coercion through memory for these types of calls.
I have added a test for this.
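As a hypothetical illustration (not the PR's actual test), a struct in this size range and the shape of the call that now lowers:
```
/* 96 bits: larger than one 64-bit register, smaller than 128 bits.
 * Per the AArch64 ABI the argument is coerced through a stack
 * temporary into two 64-bit values passed in registers. */
typedef struct {
  int a, b, c; /* 12 bytes */
} S;

void callee(S s);

void caller(void) {
  S s = {1, 2, 3};
  callee(s);
}
```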