//! This defines the syntax of MIR, i.e., the set of available MIR operations, and other definitions
//! closely related to MIR semantics.
//! This is in a dedicated file so that changes to this file can be reviewed more carefully.
//! The intention is that this file only contains datatype declarations, no code.
use rustc_abi::{FieldIdx, VariantIdx};
use rustc_ast::{InlineAsmOptions, InlineAsmTemplatePiece, Mutability};
use rustc_data_structures::packed::Pu128;
use rustc_hir::CoroutineKind;
use rustc_hir::def_id::DefId;
use rustc_index::IndexVec;
use rustc_macros::{HashStable, TyDecodable, TyEncodable, TypeFoldable, TypeVisitable};
use rustc_span::def_id::LocalDefId;
use rustc_span::source_map::Spanned;
use rustc_span::{Span, Symbol};
use rustc_target::asm::InlineAsmRegOrRegClass;
use smallvec::SmallVec;
use super::{BasicBlock, Const, Local, UserTypeProjection};
use crate::mir::coverage::CoverageKind;
use crate::ty::adjustment::PointerCoercion;
use crate::ty::{self, GenericArgsRef, List, Region, Ty, UserTypeAnnotationIndex};
/// Represents the "flavors" of MIR.
///
/// The MIR pipeline is structured into a few major dialects, with one or more phases within each
/// dialect. A MIR flavor is identified by a dialect-phase pair. A single `MirPhase` value
/// specifies such a pair. All flavors of MIR use the same data structure to represent the program.
///
/// Different MIR dialects have different semantics. (The differences between dialects are small,
/// but they do exist.) The progression from one MIR dialect to the next is technically a lowering
/// from one IR to another. In other words, a single well-formed [`Body`](crate::mir::Body) might
/// have different semantic meaning and different behavior at runtime in the different dialects.
/// The specific differences between dialects are described on the variants below.
///
/// Phases exist only to place restrictions on what language constructs are permitted in
/// well-formed MIR, and subsequent phases mostly increase those restrictions. I.e., converting MIR
/// from one phase to the next might require removing/replacing certain MIR constructs.
///
/// When adding dialects or phases, remember to update [`MirPhase::index`].
#[derive(Copy, Clone, TyEncodable, TyDecodable, Debug, PartialEq, Eq, PartialOrd, Ord)]
#[derive(HashStable)]
pub enum MirPhase {
/// The "built MIR" dialect, as generated by MIR building.
///
/// The only things that operate on this dialect are unsafeck, the various MIR lints, and const
/// qualifs.
///
/// This dialect has just the one (implicit) phase, which places few restrictions on what MIR
/// constructs are allowed.
Built,
/// The "analysis MIR" dialect, used for borrowck and friends.
///
/// The only semantic difference between built MIR and analysis MIR relates to constant
/// promotion. In built MIR, sequences of statements that would generally be subject to
/// constant promotion are semantically constants, while in analysis MIR all constants are
/// explicit.
///
/// The result of const promotion is available from the `mir_promoted` and `promoted_mir`
/// queries.
///
/// The phases of this dialect are described in `AnalysisPhase`.
Analysis(AnalysisPhase),
/// The "runtime MIR" dialect, used for CTFE, optimizations, and codegen.
///
/// The semantic differences between analysis MIR and runtime MIR are as follows.
///
/// - Drops: In analysis MIR, `Drop` terminators represent *conditional* drops; roughly
/// speaking, if dataflow analysis determines that the place being dropped is uninitialized,
/// the drop will not be executed. The exact semantics of this aren't written down anywhere,
/// which means they are essentially "what drop elaboration does." In runtime MIR, the drops
/// are unconditional; when a `Drop` terminator is reached, if the type has drop glue that
/// drop glue is always executed. This may be UB if the underlying place is not initialized.
/// - Packed drops: Places might in general be misaligned. In most cases this is UB; the
/// exception is fields of packed structs. In analysis MIR, `Drop(P)` for a `P` that might be
/// misaligned for this reason implicitly moves `P` to a temporary before dropping. Runtime
/// MIR has no such rule, and dropping a misaligned place is simply UB.
/// - Async drops: After drop elaboration, some drops may become async (see the `drop` and
/// `async_fut` fields). The `StateTransform` pass will either expand those async drops or
/// reset them to synchronous drops.
/// - Unwinding: in analysis MIR, unwinding from a function which may not unwind aborts. In
/// runtime MIR, this is UB.
/// - Retags: If `-Zmir-emit-retag` is enabled, analysis MIR has "implicit" retags in the same
/// way that Rust itself has them. Where exactly these are is generally subject to change,
/// and so we don't document this here. Runtime MIR has most retags explicit (though implicit
/// retags can still occur at `Rvalue::{Ref,AddrOf}`).
/// - Coroutine bodies: In analysis MIR, locals may actually be behind a pointer that user code
/// has access to. This occurs in coroutine bodies. Such locals do not behave like other
/// locals, because they e.g. may be aliased in surprising ways. Runtime MIR has no such
/// special locals. All coroutine bodies are lowered and so all places that look like locals
/// really are locals.
///
/// Also note that the lint pass which reports e.g. `200_u8 + 200_u8` as an error is run as part
/// of the analysis-to-runtime MIR lowering. This means that, to ensure lints are reported
/// reliably, transformations that can suppress such errors should not run on analysis MIR.
///
/// The phases of this dialect are described in `RuntimePhase`.
Runtime(RuntimePhase),
}
/// See [`MirPhase::Analysis`].
#[derive(Copy, Clone, TyEncodable, TyDecodable, Debug, PartialEq, Eq, PartialOrd, Ord)]
#[derive(HashStable)]
pub enum AnalysisPhase {
Initial = 0,
/// Beginning in this phase, the following variants are disallowed:
/// * [`TerminatorKind::FalseUnwind`]
/// * [`TerminatorKind::FalseEdge`]
/// * [`StatementKind::FakeRead`]
/// * [`StatementKind::AscribeUserType`]
/// * [`StatementKind::Coverage`] with [`CoverageKind::BlockMarker`] or
/// [`CoverageKind::SpanMarker`]
/// * [`Rvalue::Ref`] with `BorrowKind::Fake`
/// * [`CastKind::PointerCoercion`] with any of the following:
/// * [`PointerCoercion::ArrayToPointer`]
/// * [`PointerCoercion::MutToConstPointer`]
///
/// Furthermore, `Deref` projections must be the first projection within any place (if they
/// appear at all).
PostCleanup = 1,
}
/// See [`MirPhase::Runtime`].
#[derive(Copy, Clone, TyEncodable, TyDecodable, Debug, PartialEq, Eq, PartialOrd, Ord)]
#[derive(HashStable)]
pub enum RuntimePhase {
/// In addition to the semantic changes, beginning with this phase, the following variants are
/// disallowed:
/// * [`TerminatorKind::Yield`]
/// * [`TerminatorKind::CoroutineDrop`]
/// * [`Rvalue::Aggregate`] for any `AggregateKind` except `Array`
/// * [`PlaceElem::OpaqueCast`]
///
/// And the following variants are allowed:
/// * [`StatementKind::Retag`]
/// * [`StatementKind::SetDiscriminant`]
/// * [`StatementKind::Deinit`]
///
/// Furthermore, `Copy` operands are allowed for non-`Copy` types.
Initial = 0,
/// Beginning with this phase, the following variant is disallowed:
/// * [`ProjectionElem::Deref`] of `Box`
PostCleanup = 1,
Optimized = 2,
}
///////////////////////////////////////////////////////////////////////////
// Borrow kinds
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord, TyEncodable, TyDecodable)]
#[derive(Hash, HashStable)]
pub enum BorrowKind {
/// Data must be immutable and is aliasable.
Shared,
/// An immutable, aliasable borrow that is discarded after borrow-checking. Can behave either
/// like a normal shared borrow or like a special shallow borrow (see [`FakeBorrowKind`]).
///
/// This is used when lowering index expressions and matches, to prevent code like
/// the following from compiling:
/// ```compile_fail,E0510
/// let mut x: &[_] = &[[0, 1]];
/// let y: &[_] = &[];
/// let _ = x[0][{x = y; 1}];
/// ```
/// ```compile_fail,E0510
/// let mut x = &Some(0);
/// match *x {
/// None => (),
/// Some(_) if { x = &None; false } => (),
/// Some(_) => (),
/// }
/// ```
/// We can also report errors with this kind of borrow differently.
Fake(FakeBorrowKind),
/// Data is mutable and not aliasable.
Mut { kind: MutBorrowKind },
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord, TyEncodable, TyDecodable)]
#[derive(Hash, HashStable)]
pub enum RawPtrKind {
Mut,
Const,
/// Creates a raw pointer to a place that will only be used to access its metadata,
/// not the data behind the pointer. Note that this limitation is *not* enforced
/// by the validator.
///
/// The borrow checker allows overlap of these raw pointers with references to the
/// data. This is sound even if the pointer is "misused" since any such use is anyway
/// unsafe. In terms of the operational semantics (i.e., Miri), this is equivalent
/// to `RawPtrKind::Mut`, but will never incur a retag.
FakeForPtrMetadata,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord, TyEncodable, TyDecodable)]
#[derive(Hash, HashStable)]
pub enum MutBorrowKind {
Default,
/// This borrow arose from method-call auto-ref (i.e., `adjustment::Adjust::Borrow`).
TwoPhaseBorrow,
/// Data must be immutable but not aliasable. This kind of borrow
/// cannot currently be expressed by the user and is used only in
/// implicit closure bindings. It is needed when the closure is
/// borrowing or mutating a mutable referent, e.g.:
/// ```
/// let mut z = 3;
/// let x: &mut isize = &mut z;
/// let y = || *x += 5;
/// ```
/// If we were to try to translate this closure into a more explicit
/// form, we'd encounter an error with the code as written:
/// ```compile_fail,E0594
/// struct Env<'a> { x: &'a &'a mut isize }
/// let mut z = 3;
/// let x: &mut isize = &mut z;
/// let y = (&mut Env { x: &x }, fn_ptr); // Closure is pair of env and fn
/// fn fn_ptr(env: &mut Env) { **env.x += 5; }
/// ```
/// This is then illegal because you cannot mutate an `&mut` found
/// in an aliasable location. To solve, you'd have to translate with
/// an `&mut` borrow:
/// ```compile_fail,E0596
/// struct Env<'a> { x: &'a mut &'a mut isize }
/// let mut z = 3;
/// let x: &mut isize = &mut z;
/// let y = (&mut Env { x: &mut x }, fn_ptr); // changed from &x to &mut x
/// fn fn_ptr(env: &mut Env) { **env.x += 5; }
/// ```
/// Now the assignment to `**env.x` is legal, but creating a
/// mutable pointer to `x` is not because `x` is not mutable. We
/// could fix this by declaring `x` as `let mut x`. This is ok in
/// user code, if awkward, but extra weird for closures, since the
/// borrow is hidden.
///
/// So we introduce a `ClosureCapture` borrow: the user does not have to mark the variable
/// containing the mutable reference as `mut`, since they never intended to mutate the mutable
/// reference itself. We still capture it mutably so that the pointed-to value can be mutated
/// through it (without mutating the reference itself).
///
/// This solves the problem. For simplicity, we don't give users a way to express this kind of
/// borrow; it is only used when translating closures.
ClosureCapture,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord, TyEncodable, TyDecodable)]
#[derive(Hash, HashStable)]
pub enum FakeBorrowKind {
/// A shared shallow borrow. The immediately borrowed place must be immutable, but projections
/// from it don't need to be. For example, a shallow borrow of `a.b` doesn't conflict with a
/// mutable borrow of `a.b.c`.
///
/// This is used when lowering matches: when matching on a place we want to ensure that the place
/// has the same value from the start of the match until an arm is selected. This prevents this
/// code from compiling:
/// ```compile_fail,E0510
/// let mut x = &Some(0);
/// match *x {
/// None => (),
/// Some(_) if { x = &None; false } => (),
/// Some(_) => (),
/// }
/// ```
/// This can't be a shared borrow because mutably borrowing `(*x as Some).0` should not prevent
/// checking the discriminant or accessing other variants: mutating `(*x as Some).0` can't
/// affect the discriminant of `x`. E.g. the following is allowed:
/// ```rust
/// let mut x = Some(0);
/// match x {
/// Some(_)
/// if {
/// if let Some(ref mut y) = x {
/// *y += 1;
/// };
/// true
/// } => {}
/// _ => {}
/// }
/// ```
Shallow,
/// A shared (deep) borrow. Data must be immutable and is aliasable.
///
/// This is used when lowering deref patterns, where shallow borrows wouldn't prevent something
/// like:
/// ```compile_fail
/// let mut b = Box::new(false);
/// match b {
/// deref!(true) => {} // not reached because `*b == false`
/// _ if { *b = true; false } => {} // not reached because the guard is `false`
/// deref!(false) => {} // not reached because the guard changed it
/// // UB because we reached the unreachable.
/// }
/// ```
Deep,
}
///////////////////////////////////////////////////////////////////////////
// Statements
/// The various kinds of statements that can appear in MIR.
///
/// Not all of these are allowed at every [`MirPhase`]. Check the documentation there to see which
/// ones you do not have to worry about. The MIR validator will generally enforce such restrictions,
/// causing an ICE if they are violated.
#[derive(Clone, Debug, PartialEq, TyEncodable, TyDecodable, Hash, HashStable)]
#[derive(TypeFoldable, TypeVisitable)]
pub enum StatementKind<'tcx> {
/// Assign statements roughly correspond to an assignment in Rust proper (`x = ...`) except
/// without the possibility of dropping the previous value (that must be done separately, if at
/// all). The *exact* way this works is undecided. It probably does something like evaluating
/// the LHS to a place and the RHS to a value, and then storing the value to the place. Various
/// parts of this may do type specific things that are more complicated than simply copying
/// bytes.
///
/// **Needs clarification**: The implication of the above idea would be that assignment implies
/// that the resulting value is initialized. I believe we could commit to this separately from
/// committing to whatever part of the memory model we would need to decide on to make the above
/// paragraph precise. Do we want to?
///
/// Assignments in which the types of the place and rvalue differ are not well-formed.
///
/// **Needs clarification**: Do we ever want to worry about non-free (in the body) lifetimes for
/// the typing requirement in post drop-elaboration MIR? I think probably not - I'm not sure we
/// could meaningfully require this anyway. How about free lifetimes? Is ignoring this
/// interesting for optimizations? Do we want to allow such optimizations?
///
/// **Needs clarification**: We currently require that the LHS place not overlap with any place
/// read as part of computation of the RHS for some rvalues (generally those not producing
/// primitives). This requirement is under discussion in [#68364]. As a part of this discussion,
/// it is also unclear in what order the components are evaluated.
///
/// [#68364]: https://github.com/rust-lang/rust/issues/68364
///
/// See [`Rvalue`] documentation for details on each of those.
Assign(Box<(Place<'tcx>, Rvalue<'tcx>)>),
/// When executed at runtime, this is a nop.
///
/// During static analysis, a fake read:
/// - requires that the value being read is initialized (or, in the case
/// of closures, that it was fully initialized at some point in the past)
/// - constitutes a use of a value for the purposes of NLL (i.e. if the
/// value being fake-read is a reference, the lifetime of that reference
/// will be extended to cover the `FakeRead`)
/// - but, unlike an actual read, does *not* invalidate any exclusive
/// borrows.
///
/// See [`FakeReadCause`] for more details on the situations in which a
/// `FakeRead` is emitted.
///
/// Disallowed after drop elaboration.
FakeRead(Box<(FakeReadCause, Place<'tcx>)>),
/// Write the discriminant for a variant to the enum Place.
///
/// This is permitted for both coroutines and ADTs. This does not necessarily write to the
/// entire place; instead, it writes to the minimum set of bytes as required by the layout for
/// the type.
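///
/// As an illustrative sketch (not actual compiler output), building `Some(_1)` in `_2` without
/// an `Aggregate` rvalue might use `Deinit` (below) together with `SetDiscriminant`:
/// ```ignore (illustrative)
/// Deinit(_2);
/// (_2 as Some).0 = move _1;
/// discriminant(_2) = 1;
/// ```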
SetDiscriminant { place: Box<Place<'tcx>>, variant_index: VariantIdx },
/// Deinitializes the place.
///
/// This writes `uninit` bytes to the entire place.
Deinit(Box<Place<'tcx>>),
/// `StorageLive` and `StorageDead` statements mark the live range of a local.
///
/// At any point during the execution of a function, each local is either allocated or
/// unallocated. Except as noted below, all locals except function parameters are initially
/// unallocated. `StorageLive` statements cause memory to be allocated for the local while
/// `StorageDead` statements cause the memory to be freed. In other words,
/// `StorageLive`/`StorageDead` act like the heap operations `allocate`/`deallocate`, but for
/// stack-allocated local variables. Using a local in any way (not only reading/writing from it)
/// while it is unallocated is UB.
///
/// Some locals have no `StorageLive` or `StorageDead` statements within the entire MIR body.
/// These locals are implicitly allocated for the full duration of the function. There is a
/// convenience method at `rustc_mir_dataflow::storage::always_storage_live_locals` for
/// computing these locals.
///
/// If the local is already allocated, calling `StorageLive` again will implicitly free the
/// local and then allocate fresh uninitialized memory. If a local is already deallocated,
/// calling `StorageDead` again is a NOP.
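///
/// An illustrative sketch (not actual compiler output) of a local's live range:
/// ```ignore (illustrative)
/// StorageLive(_1);
/// _1 = const 5_i32;
/// _2 = copy _1;
/// StorageDead(_1);
/// ```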
StorageLive(Local),
/// See `StorageLive` above.
StorageDead(Local),
/// Retag references in the given place, ensuring they got fresh tags.
///
/// This is part of the Stacked Borrows model. These statements are currently only interpreted
/// by miri and only generated when `-Z mir-emit-retag` is passed. See
/// <https://internals.rust-lang.org/t/stacked-borrows-an-aliasing-model-for-rust/8153/> for
/// more details.
///
/// For code that is not specific to stacked borrows, you should treat retags as reading and
/// modifying the place in an opaque way.
///
/// Only `RetagKind::Default` and `RetagKind::FnEntry` are permitted.
Retag(RetagKind, Box<Place<'tcx>>),
/// This statement exists to preserve a trace of a scrutinee matched against a wildcard binding.
/// This is especially useful for `let _ = PLACE;` bindings that desugar to a single
/// `PlaceMention(PLACE)`.
///
/// When executed at runtime, this computes the given place, but then discards
/// it without doing a load. `let _ = *ptr;` is fine even if the pointer is dangling.
PlaceMention(Box<Place<'tcx>>),
/// Encodes a user's type ascription. These need to be preserved
/// intact so that NLL can respect them. For example:
/// ```ignore (illustrative)
/// let a: T = y;
/// ```
/// The effect of this annotation is to relate the type `T_y` of the place `y`
/// to the user-given type `T`. The effect depends on the specified variance:
///
/// - `Covariant` -- requires that `T_y <: T`
/// - `Contravariant` -- requires that `T_y :> T`
/// - `Invariant` -- requires that `T_y == T`
/// - `Bivariant` -- no effect
///
/// When executed at runtime this is a nop.
///
/// Disallowed after drop elaboration.
AscribeUserType(Box<(Place<'tcx>, UserTypeProjection)>, ty::Variance),
/// Carries control-flow-sensitive information injected by `-Cinstrument-coverage`,
/// such as where to generate physical coverage-counter-increments during codegen.
///
/// Coverage statements are used in conjunction with the coverage mappings and other
/// information stored in the function's
/// [`mir::Body::function_coverage_info`](crate::mir::Body::function_coverage_info).
/// (For inlined MIR, take care to look up the *original function's* coverage info.)
///
/// Interpreters and codegen backends that don't support coverage instrumentation
/// can usually treat this as a no-op.
Coverage(
// Coverage statements are unlikely to ever contain type information in
// the foreseeable future, so excluding them from TypeFoldable/TypeVisitable
// avoids some unhelpful derive boilerplate.
#[type_foldable(identity)]
#[type_visitable(ignore)]
CoverageKind,
),
/// Denotes a call to an intrinsic that does not require an unwind path and always returns.
/// This avoids adding a new block and a terminator for simple intrinsics.
Intrinsic(Box<NonDivergingIntrinsic<'tcx>>),
/// Instructs the const eval interpreter to increment a counter; this counter is used to track
/// how many steps the interpreter has taken. It is used to prevent the user from writing const
/// code that runs for too long or infinitely. Other than in the const eval interpreter, this
/// is a no-op.
ConstEvalCounter,
/// No-op. Useful for deleting instructions without affecting statement indices.
Nop,
/// Marker statement indicating where `place` would be dropped.
/// This is semantically equivalent to `Nop`, so codegen and MIRI should interpret this
/// statement as such.
/// The only use case of this statement is for linting in MIR to detect temporary lifetime
/// changes.
BackwardIncompatibleDropHint {
/// Place to drop
place: Box<Place<'tcx>>,
/// Reason for backward incompatibility
reason: BackwardIncompatibleDropReason,
},
}
#[derive(
Clone,
TyEncodable,
TyDecodable,
Debug,
PartialEq,
Hash,
HashStable,
TypeFoldable,
TypeVisitable
)]
pub enum NonDivergingIntrinsic<'tcx> {
/// Denotes a call to the intrinsic function `assume`.
///
/// The operand must be a boolean. Optimizers may use the value of the boolean to backtrack along
/// its computation and infer information about other variables. So if the boolean came from an
/// `x < y` operation, subsequent operations on `x` and `y` could elide various bounds checks.
/// If the argument is `false`, this operation is equivalent to `TerminatorKind::Unreachable`.
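///
/// An illustrative sketch (not actual compiler output) of how such an intrinsic might appear:
/// ```ignore (illustrative)
/// _3 = Lt(copy _1, copy _2);
/// assume(move _3);
/// ```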
Assume(Operand<'tcx>),
/// Denotes a call to the intrinsic function `copy_nonoverlapping`.
///
/// First, all three operands are evaluated. `src` and `dest` must each be a reference, pointer,
/// or `Box` pointing to the same type `T`. `count` must evaluate to a `usize`. Then, `src` and
/// `dest` are dereferenced, and `count * size_of::<T>()` bytes beginning with the first byte of
/// the `src` place are copied to the contiguous range of bytes beginning with the first byte
/// of `dest`.
///
/// **Needs clarification**: In what order are operands computed and dereferenced? It should
/// probably match the order for assignment, but that is also undecided.
///
/// **Needs clarification**: Is this typed or not, i.e., is there a typed load and store involved?
/// I vaguely remember Ralf saying somewhere that he thought it should not be.
CopyNonOverlapping(CopyNonOverlapping<'tcx>),
}
/// Describes what kind of retag is to be performed.
#[derive(Copy, Clone, TyEncodable, TyDecodable, Debug, PartialEq, Eq, Hash, HashStable)]
#[rustc_pass_by_value]
pub enum RetagKind {
/// The initial retag of arguments when entering a function.
FnEntry,
/// Retag preparing for a two-phase borrow.
TwoPhase,
/// Retagging raw pointers.
Raw,
/// A "normal" retag.
Default,
}
/// A `FakeReadCause` describes the kind of pattern that caused a `FakeRead` statement to be emitted.
#[derive(Copy, Clone, TyEncodable, TyDecodable, Debug, Hash, HashStable, PartialEq)]
pub enum FakeReadCause {
/// A fake read injected into a match guard to ensure that the discriminants
/// that are being matched on aren't modified while the match guard is being
/// evaluated.
///
/// At the beginning of each match guard, a [fake borrow][FakeBorrowKind] is
/// inserted for each discriminant accessed in the entire `match` statement.
///
/// Then, at the end of the match guard, a `FakeRead(ForMatchGuard)` is
/// inserted to keep the fake borrows alive until that point.
///
/// This should ensure that you cannot change the variant for an enum while
/// you are in the midst of matching on it.
ForMatchGuard,
/// Fake read of the scrutinee of a `match` or destructuring `let`
/// (i.e. `let` with non-trivial pattern).
///
/// In `match x { ... }`, we generate a `FakeRead(ForMatchedPlace, x)`
/// and insert it into the `otherwise_block` (which is supposed to be
/// unreachable for irrefutable pattern-matches like `match` or `let`).
///
/// This is necessary because `let x: !; match x {}` doesn't generate any
/// actual read of x, so we need to generate a `FakeRead` to check that it
/// is initialized.
///
/// If the `FakeRead(ForMatchedPlace)` is being performed with a closure
/// that doesn't capture the required upvars, the `FakeRead` within the
/// closure is omitted entirely.
///
/// To make sure that this is still sound, if a closure matches against
/// a Place starting with an Upvar, we hoist the `FakeRead` to the
/// definition point of the closure.
///
/// If the `FakeRead` comes from being hoisted out of a closure like this,
/// we record the `LocalDefId` of the closure. Otherwise, the `Option` will be `None`.
//
// We can use LocalDefId here since fake read statements are removed
// before codegen in the `CleanupNonCodegenStatements` pass.
ForMatchedPlace(Option<LocalDefId>),
/// A fake read injected into a match guard to ensure that the places
/// bound by the pattern are immutable for the duration of the match guard.
///
/// Within a match guard, references are created for each place that the
/// pattern creates a binding for — this is known as the `RefWithinGuard`
/// version of the variables. To make sure that the references stay
/// alive until the end of the match guard, and properly prevent the
/// places in question from being modified, a `FakeRead(ForGuardBinding)`
/// is inserted at the end of the match guard.
///
/// For details on how these references are created, see the extensive
/// documentation on `bind_matched_candidate_for_guard` in
/// `rustc_mir_build`.
ForGuardBinding,
/// Officially, the semantics of
///
/// `let pattern = <expr>;`
///
/// is that `<expr>` is evaluated into a temporary and then this temporary is moved
/// into the pattern.
///
/// However, if we see the simple pattern `let var = <expr>`, we optimize this to
/// evaluate `<expr>` directly into the variable `var`. This is mostly unobservable,
/// but in some cases it can affect the borrow checker, as in #53695.
///
/// Therefore, we insert a `FakeRead(ForLet)` immediately after each `let`
/// with a trivial pattern.
///
/// FIXME: `ExprUseVisitor` has an entirely different opinion on what `FakeRead(ForLet)`
/// is supposed to mean. If it was accurate to what MIR lowering does,
/// would it even make sense to hoist these out of closures like
/// `ForMatchedPlace`?
ForLet(Option<LocalDefId>),
/// Currently, index expressions overloaded through the `Index` trait
/// get lowered differently from index expressions with builtin semantics
/// for arrays and slices: the latter will emit code to perform
/// bounds checks, and then return a MIR place that will only perform the
/// indexing "for real" when it gets incorporated into an instruction.
///
/// This is observable in the fact that the following compiles:
///
/// ```
/// fn f(x: &mut [&mut [u32]], i: usize) {
/// x[i][x[i].len() - 1] += 1;
/// }
/// ```
///
/// However, we need to be careful not to let the user invalidate the
/// bounds check with an expression like
///
/// `(*x)[1][{ x = y; 4}]`
///
/// Here, the first bounds check would be invalidated when we evaluate the
/// second index expression. To make sure that this doesn't happen, we
/// create a fake borrow of `x` and hold it while we evaluate the second
/// index.
///
/// This borrow is kept alive by a `FakeRead(ForIndex)` at the end of its
/// scope.
ForIndex,
}
#[derive(Clone, Debug, PartialEq, TyEncodable, TyDecodable, Hash, HashStable)]
#[derive(TypeFoldable, TypeVisitable)]
pub struct CopyNonOverlapping<'tcx> {
pub src: Operand<'tcx>,
pub dst: Operand<'tcx>,
/// Number of elements to copy from src to dest, not bytes.
pub count: Operand<'tcx>,
}
/// Represents how a [`TerminatorKind::Call`] was constructed.
/// Used only for diagnostics.
#[derive(Clone, Copy, TyEncodable, TyDecodable, Debug, PartialEq, Hash, HashStable)]
#[derive(TypeFoldable, TypeVisitable)]
pub enum CallSource {
/// This came from something such as `a > b` or `a + b`. In THIR, if `from_hir_call`
/// is false then this is the desugaring.
OverloadedOperator,
/// This was from a comparison generated by a match, used by const-eval for better errors
/// when the comparison cannot be done at compile time.
///
/// (see <https://github.com/rust-lang/rust/issues/90237>)
MatchCmp,
/// Other types of desugaring that did not come from the HIR, but we don't care about
/// for diagnostics (yet).
Misc,
/// Use of a value, generating a clone function call.
Use,
/// Normal function call, no special source
Normal,
}
#[derive(Clone, Copy, Debug, TyEncodable, TyDecodable, Hash, HashStable, PartialEq)]
#[derive(TypeFoldable, TypeVisitable)]
/// The macro that an inline assembly block was created by
pub enum InlineAsmMacro {
/// The `asm!` macro
Asm,
/// The `naked_asm!` macro
NakedAsm,
}
///////////////////////////////////////////////////////////////////////////
// Terminators
/// The various kinds of terminators, representing ways of exiting from a basic block.
///
/// A note on unwinding: Panics may occur during the execution of some terminators. Depending on the
/// `-C panic` flag, this may either cause the program to abort or the call stack to unwind. Such
/// terminators have an `unwind: UnwindAction` field on them. If stack unwinding occurs, then
/// once the current function is reached, an action will be taken based on the `unwind` field.
/// If the action is `Cleanup`, then the execution continues at the given basic block. If the
/// action is `Continue` then no cleanup is performed, and the stack continues unwinding.
///
/// The basic block pointed to by a `Cleanup` unwind action must have its `cleanup` flag set.
/// `cleanup` basic blocks have a couple restrictions:
/// 1. All `unwind` fields in them must be `UnwindAction::Terminate` or `UnwindAction::Unreachable`.
/// 2. `Return` terminators are not allowed in them. `Terminate` and `Resume` terminators are.
/// 3. All other basic blocks (in the current body) that are reachable from `cleanup` basic blocks
/// must also be `cleanup`. This is a part of the type system and checked statically, so it is
/// still an error to have such an edge in the CFG even if it's known that it won't be taken at
/// runtime.
/// 4. The control flow between cleanup blocks must look like an upside down tree. Roughly
/// speaking, this means that control flow that looks like a V is allowed, while control flow
/// that looks like a W is not. This is necessary to ensure that landing pad information can be
/// correctly codegened on MSVC. More precisely:
///
/// Begin with the standard control flow graph `G`. Modify `G` as follows: for any two cleanup
/// vertices `u` and `v` such that `u` dominates `v`, contract `u` and `v` into a single vertex,
/// deleting self edges and duplicate edges in the process. Now remove all vertices from `G`
/// that are not cleanup vertices or are not reachable. The resulting graph must be an inverted
/// tree, that is, each vertex may have at most one successor and there may be no cycles.
#[derive(Clone, TyEncodable, TyDecodable, Hash, HashStable, PartialEq, TypeFoldable, TypeVisitable)]
pub enum TerminatorKind<'tcx> {
/// Block has one successor; we continue execution there.
Goto { target: BasicBlock },
/// Switches based on the computed value.
///
/// First, evaluates the `discr` operand. The type of the operand must be a signed or unsigned
/// integer, char, or bool, and must match the given type. Then, if the list of switch targets
/// contains the computed value, continues execution at the associated basic block. Otherwise,
/// continues execution at the "otherwise" basic block.
///
/// Target values may not appear more than once.
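///
/// An illustrative sketch (not actual compiler output) of a switch on an enum discriminant:
/// ```ignore (illustrative)
/// _2 = discriminant(_1);
/// switchInt(move _2) -> [0: bb1, 1: bb2, otherwise: bb3];
/// ```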
SwitchInt {
/// The discriminant value being tested.
discr: Operand<'tcx>,
targets: SwitchTargets,
},
/// Indicates that the landing pad is finished and that the process should continue unwinding.
///
/// Like a return, this marks the end of this invocation of the function.
///
/// Only permitted in cleanup blocks. `Resume` is not permitted with `-C unwind=abort` after
/// deaggregation runs.
UnwindResume,
/// Indicates that the landing pad is finished and that the process should terminate.
///
/// Used to prevent unwinding for foreign items or with `-C unwind=abort`. Only permitted in
/// cleanup blocks.
UnwindTerminate(UnwindTerminateReason),
/// Returns from the function.
///
/// Like function calls, the exact semantics of returns in Rust are unclear. Returning very
/// likely at least assigns the value currently in the return place (`_0`) to the place
/// specified in the associated `Call` terminator in the calling function, as if assigned via
/// `dest = move _0`. It might additionally do other things, like have side-effects in the
/// aliasing model.
///
/// If the body is a coroutine body, this has slightly different semantics; it instead causes a
/// `CoroutineState::Returned(_0)` to be created (as if by an `Aggregate` rvalue) and assigned
/// to the return place.
Return,
/// Indicates a terminator that can never be reached.
///
/// Executing this terminator is UB.
Unreachable,
/// The behavior of this terminator differs significantly before and after drop elaboration.
///
/// After drop elaboration: `Drop` terminators are a complete nop for types that have no drop
/// glue. For other types, `Drop` terminators behave exactly like a call to
/// `core::mem::drop_in_place` with a pointer to the given place.
///
/// `Drop` before drop elaboration is a *conditional* execution of the drop glue. Specifically,
/// the `Drop` will be executed if...
///
/// **Needs clarification**: End of that sentence. This in effect should document the exact
/// behavior of drop elaboration. The following sounds vaguely right, but I'm not quite sure:
///
/// > The drop glue is executed if, among all statements executed within this `Body`, an assignment to
/// > the place or one of its "parents" occurred more recently than a move out of it. This does not
/// > consider indirect assignments.
///
/// The `replace` flag indicates whether this terminator was created as part of an assignment.
/// This should only be used for diagnostic purposes, and does not have any operational
/// meaning.
///
/// Async drop processing:
/// In `compiler/rustc_mir_build/src/build/scope.rs` we detect a possible async drop: a drop of an
/// object whose type `needs_async_drop`.
/// Later, in the `StateTransform` pass, such an async drop may be expanded into an additional
/// yield point for the poll loop of the async drop future.
/// So we need a prepared `drop` target block, in a similar way as for the `Yield` terminator
/// (see `drops.build_mir::<CoroutineDrop>` in scopes.rs).
/// In `compiler/rustc_mir_transform/src/elaborate_drops.rs`, for an object implementing the
/// `AsyncDrop` trait, we need to prepare the async drop machinery: resolve `AsyncDrop::drop` and
/// codegen the call. `async_fut` is set to the corresponding local.
/// For coroutine drops we don't need this logic, because a coroutine drop works with the same
/// layout object as the coroutine itself, so `async_fut` will be `None` for a coroutine drop.
/// Both the `drop` and `async_fut` fields are only used in
/// `compiler/rustc_mir_transform/src/coroutine.rs`, in the `StateTransform` pass. In
/// `expand_async_drops`, async drops are expanded into one or two yield points with a poll
/// ready/pending switch.
/// When a coroutine has any internal async drop, the coroutine drop function will be async
/// (generated by `create_coroutine_drop_shim_async`, not `create_coroutine_drop_shim`).
Drop {
place: Place<'tcx>,
target: BasicBlock,
unwind: UnwindAction,
replace: bool,
/// Cleanup to be done if the coroutine is dropped at this suspend point (for async drop).
drop: Option<BasicBlock>,
/// Prepared async future local (for async drop)
async_fut: Option<Local>,
},
/// Roughly speaking, evaluates the `func` operand and the arguments, and starts execution of
/// the referred to function. The operand types must match the argument types of the function.
/// The return place type must match the return type. The type of the `func` operand must be
/// callable, meaning either a function pointer, a function type, or a closure type.
///
/// **Needs clarification**: The exact semantics of this. Current backends rely on `move`
/// operands not aliasing the return place. It is unclear how this is justified in MIR, see
/// [#71117].
///
/// [#71117]: https://github.com/rust-lang/rust/issues/71117
Call {
/// The function that’s being called.
func: Operand<'tcx>,
/// Arguments the function is called with.
/// These are owned by the callee, which is free to modify them.
/// This allows the memory occupied by "by-value" arguments to be
/// reused across function calls without duplicating the contents.
/// The span for each arg is also included
/// (e.g. `a` and `b` in `x.foo(a, b)`).
args: Box<[Spanned<Operand<'tcx>>]>,
/// Where the returned value will be written
destination: Place<'tcx>,
/// Where to go after this call returns. If none, the call necessarily diverges.
target: Option<BasicBlock>,
/// Action to be taken if the call unwinds.
unwind: UnwindAction,
/// Where this call came from in HIR/THIR.
call_source: CallSource,
/// This `Span` is the span of the function, without the dot and receiver
/// (e.g. `foo(a, b)` in `x.foo(a, b)`).
fn_span: Span,
},
/// Tail call.
///
/// Roughly speaking this is a chimera of [`Call`] and [`Return`], with some caveats.
/// Semantically, tail calls consist of two actions:
/// - pop of the current stack frame
/// - a call to the `func`, with the return address of the **current** caller,
/// so that a `return` inside `func` returns to the caller of the caller
/// of the function that is currently being executed
///
/// Note that, in contrast to [`Call`], this is missing
/// - `destination` (because it's always the return place)
/// - `target` (because it's always taken from the current stack frame)
/// - `unwind` (because it's always taken from the current stack frame)
///
/// [`Call`]: TerminatorKind::Call
/// [`Return`]: TerminatorKind::Return
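///
/// An illustrative sketch of surface Rust that would lower to a `TailCall` terminator, assuming
/// the unstable `explicit_tail_calls` feature (the function itself is hypothetical):
/// ```ignore (illustrative)
/// #![feature(explicit_tail_calls)]
/// fn fib_acc(n: u64, a: u64, b: u64) -> u64 {
///     if n == 0 { return a; }
///     become fib_acc(n - 1, b, a + b) // lowers to `TerminatorKind::TailCall`
/// }
/// ```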
TailCall {
/// The function that’s being called.
func: Operand<'tcx>,
/// Arguments the function is called with.
/// These are owned by the callee, which is free to modify them.
/// This allows the memory occupied by "by-value" arguments to be
/// reused across function calls without duplicating the contents.
args: Box<[Spanned<Operand<'tcx>>]>,
// FIXME(explicit_tail_calls): should we have the span for `become`? is this span accurate? do we need it?
/// This `Span` is the span of the function, without the dot and receiver
/// (e.g. `foo(a, b)` in `x.foo(a, b)`).
fn_span: Span,
},
/// Evaluates the operand, which must have type `bool`. If it is not equal to `expected`,
/// initiates a panic. Initiating a panic corresponds to a `Call` terminator with some
/// unspecified constant as the function to call, all the operands stored in the `AssertMessage`
/// as parameters, and `None` for the destination. Keep in mind that the `cleanup` path is not
/// necessarily executed even in the case of a panic, for example in `-C panic=abort`. If the
/// assertion does not fail, execution continues at the specified basic block.
///
/// When overflow checking is disabled and this is run-time MIR (as opposed to compile-time MIR
/// that is used for CTFE), the following variants of this terminator behave as `goto target`:
/// - `OverflowNeg(..)`,
/// - `Overflow(op, ..)` if op is add, sub, mul, shl, shr, but NOT div or rem.
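///
/// An illustrative sketch (not actual compiler output) of an overflow assertion:
/// ```ignore (illustrative)
/// _3 = AddWithOverflow(copy _1, copy _2);
/// assert(!move (_3.1: bool), "attempt to add with overflow") -> [success: bb2, unwind continue];
/// ```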
Assert {
cond: Operand<'tcx>,
expected: bool,
msg: Box<AssertMessage<'tcx>>,
target: BasicBlock,
unwind: UnwindAction,
},
/// Marks a suspend point.
///
/// Like `Return` terminators in coroutine bodies, this computes `value` and then creates a
/// `CoroutineState::Yielded(value)` (as if by an `Aggregate` rvalue). That value is then assigned to
/// the return place of the function calling this one, and execution continues in the calling
/// function. When next invoked with the same first argument, execution of this function
/// continues at the `resume` basic block, with the second argument written to the `resume_arg`
/// place. If the coroutine is dropped before then, the `drop` basic block is invoked.
///
/// Note that coroutines can be (unstably) cloned under certain conditions, which means that
/// this terminator can **return multiple times**! MIR optimizations that reorder code into
/// different basic blocks need to be aware of that.
/// See <https://github.com/rust-lang/rust/issues/95360>.
///
/// Not permitted in bodies that are not coroutine bodies, or after coroutine lowering.
///
/// **Needs clarification**: What about the evaluation order of the `resume_arg` and `value`?
Yield {
/// The value to return.
value: Operand<'tcx>,
/// Where to resume to.
resume: BasicBlock,
/// The place to store the resume argument in.
resume_arg: Place<'tcx>,
/// Cleanup to be done if the coroutine is dropped at this suspend point.
drop: Option<BasicBlock>,
},
/// Indicates the end of dropping a coroutine.
///
/// Semantically just a `return` (from the coroutine's drop glue). Only permitted in the same situations
/// as `yield`.
///
/// **Needs clarification**: Is that even correct? The coroutine drop code is always confusing
/// to me, because it's not even really in the current body.
///
/// **Needs clarification**: Are there type system constraints on these terminators? Should
/// there be a "block type" like `cleanup` blocks for them?
CoroutineDrop,
/// A block where control flow only ever takes one real path, but borrowck needs to be more
/// conservative.
///
/// At runtime this is semantically just a goto.
///
/// Disallowed after drop elaboration.
FalseEdge {
/// The target normal control flow will take.
real_target: BasicBlock,
/// A block control flow could conceptually jump to, but won't in
/// practice.
imaginary_target: BasicBlock,
},
/// A terminator for blocks that only take one path in reality, but where we reserve the right
/// to unwind in borrowck, even if it won't happen in practice. This can arise in infinite loops
/// with no function calls for example.
///
/// At runtime this is semantically just a goto.
///
/// Disallowed after drop elaboration.
FalseUnwind {
/// The target normal control flow will take.
real_target: BasicBlock,
/// The imaginary cleanup block link. This particular path will never be taken
/// in practice, but in order to avoid fragility we want to always
/// consider it in borrowck. We don't want to accept programs which
/// pass borrowck only when `panic=abort` or some assertions are disabled
/// due to release vs. debug mode builds.
unwind: UnwindAction,
},
/// Block ends with an inline assembly block. This is a terminator since
/// inline assembly is allowed to diverge.
InlineAsm {
/// Macro used to create this inline asm: one of `asm!` or `naked_asm!`
asm_macro: InlineAsmMacro,
/// The template for the inline assembly, with placeholders.
#[type_foldable(identity)]
#[type_visitable(ignore)]
template: &'tcx [InlineAsmTemplatePiece],
/// The operands for the inline assembly, as `Operand`s or `Place`s.
operands: Box<[InlineAsmOperand<'tcx>]>,
/// Miscellaneous options for the inline assembly.
options: InlineAsmOptions,
/// Source spans for each line of the inline assembly code. These are
/// used to map assembler errors back to the line in the source code.
#[type_foldable(identity)]
#[type_visitable(ignore)]
line_spans: &'tcx [Span],
/// Valid targets for the inline assembly.
/// The first element is the fallthrough destination, unless
/// `asm_macro == InlineAsmMacro::NakedAsm` or `InlineAsmOptions::NORETURN` is set.
targets: Box<[BasicBlock]>,
/// Action to be taken if the inline assembly unwinds. This is present
/// if and only if `InlineAsmOptions::MAY_UNWIND` is set.
unwind: UnwindAction,
},
}
#[derive(
Clone,
Debug,
TyEncodable,
TyDecodable,
Hash,
HashStable,
PartialEq,
TypeFoldable,
TypeVisitable
)]
pub enum BackwardIncompatibleDropReason {
Edition2024,
}
#[derive(Debug, Clone, TyEncodable, TyDecodable, Hash, HashStable, PartialEq)]
pub struct SwitchTargets {
/// Possible values. For each value, the location to branch to is found in
/// the corresponding element in the `targets` vector.
pub(super) values: SmallVec<[Pu128; 1]>,
/// Possible branch targets. The last element of this vector is used for
/// the "otherwise" branch, so `targets.len() == values.len() + 1` always
/// holds.
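///
/// An illustrative sketch (not a layout guarantee) of how a rendered terminator maps onto
/// these two fields:
/// ```ignore (illustrative)
/// switchInt(move _2) -> [0: bb1, 7: bb2, otherwise: bb3]
/// // values:  [0, 7]
/// // targets: [bb1, bb2, bb3]
/// ```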
//
// Note: This invariant is non-obvious and easy to violate. This would be a
// more rigorous representation:
//
// normal: SmallVec<[(Pu128, BasicBlock); 1]>,
// otherwise: BasicBlock,
//
// But it's important to have the targets in a sliceable type, because
// target slices show up elsewhere. E.g. `TerminatorKind::InlineAsm` has a
// boxed slice, and `TerminatorKind::FalseEdge` has a single target that
// can be converted to a slice with `slice::from_ref`.
//
// Why does this matter? In functions like `TerminatorKind::successors` we
// return `impl Iterator` and a non-slice-of-targets representation here
// causes problems because multiple different concrete iterator types would
// be involved and we would need a boxed trait object, which requires an
// allocation, which is expensive if done frequently.
pub(super) targets: SmallVec<[BasicBlock; 2]>,
}
/// Action to be taken when a stack unwind happens.
#[derive(Copy, Clone, Debug, PartialEq, Eq, TyEncodable, TyDecodable, Hash, HashStable)]
#[derive(TypeFoldable, TypeVisitable)]
pub enum UnwindAction {
/// No action is to be taken. Continue unwinding.
///
/// This is similar to `Cleanup(bb)` where `bb` does nothing but `Resume`, but they are not
/// equivalent, as the presence of `Cleanup(_)` will make a frame non-POF.
Continue,
/// Triggers undefined behavior if unwind happens.
Unreachable,
/// Terminates the execution if unwind happens.
///
/// Depending on the platform and situation this may cause a non-unwindable panic or abort.
Terminate(UnwindTerminateReason),
/// Cleanups to be done.
Cleanup(BasicBlock),
}
/// The reason we are terminating the process during unwinding.
#[derive(Copy, Clone, Debug, PartialEq, Eq, TyEncodable, TyDecodable, Hash, HashStable)]
#[derive(TypeFoldable, TypeVisitable)]
pub enum UnwindTerminateReason {
/// Unwinding is just not possible given the ABI of this function.
Abi,
/// We were already cleaning up for an ongoing unwind, and a *second*, *nested* unwind was
/// triggered by the drop glue.
InCleanup,
}
/// Information about an assertion failure.
#[derive(Clone, Hash, HashStable, PartialEq, Debug)]
#[derive(TyEncodable, TyDecodable, TypeFoldable, TypeVisitable)]
pub enum AssertKind<O> {
BoundsCheck { len: O, index: O },
Overflow(BinOp, O, O),
OverflowNeg(O),
DivisionByZero(O),
RemainderByZero(O),
ResumedAfterReturn(CoroutineKind),
ResumedAfterPanic(CoroutineKind),
ResumedAfterDrop(CoroutineKind),
MisalignedPointerDereference { required: O, found: O },
NullPointerDereference,
InvalidEnumConstruction(O),
}
#[derive(Clone, Debug, PartialEq, TyEncodable, TyDecodable, Hash, HashStable)]
#[derive(TypeFoldable, TypeVisitable)]
pub enum InlineAsmOperand<'tcx> {
In {
reg: InlineAsmRegOrRegClass,
value: Operand<'tcx>,
},
Out {
reg: InlineAsmRegOrRegClass,
late: bool,
place: Option<Place<'tcx>>,
},
InOut {
reg: InlineAsmRegOrRegClass,
late: bool,
in_value: Operand<'tcx>,
out_place: Option<Place<'tcx>>,
},
Const {
value: Box<ConstOperand<'tcx>>,
},
SymFn {
value: Box<ConstOperand<'tcx>>,
},
SymStatic {
def_id: DefId,
},
Label {
/// This represents the index into the `targets` array in `TerminatorKind::InlineAsm`.
target_index: usize,
},
}
/// Type for MIR `Assert` terminator error messages.
pub type AssertMessage<'tcx> = AssertKind<Operand<'tcx>>;
///////////////////////////////////////////////////////////////////////////
// Places
/// Places roughly correspond to a "location in memory." Places in MIR are the same mathematical
/// object as places in Rust. This of course means that what exactly they are is undecided and part
/// of the Rust memory model. However, they will likely contain at least the following pieces of
/// information in some form:
///
/// 1. The address in memory that the place refers to.
/// 2. The provenance with which the place is being accessed.
/// 3. The type of the place and an optional variant index. See [`PlaceTy`][super::PlaceTy].
/// 4. Optionally, some metadata. This exists if and only if the type of the place is not `Sized`.
///
/// We'll give a description below of how all pieces of the place except for the provenance are
/// calculated. We cannot give a description of the provenance, because that is part of the
/// undecided aliasing model - we only include it here at all to acknowledge its existence.
///
/// Each local naturally corresponds to the place `Place { local, projection: [] }`. This place has
/// the address of the local's allocation and the type of the local.
///
/// For places that are not locals, i.e., they have a non-empty list of projections, we define the
/// values as a function of the parent place, that is the place with its last [`ProjectionElem`]
/// stripped. The way this is computed of course depends on the kind of that last projection
/// element:
///
/// - [`Downcast`](ProjectionElem::Downcast): This projection sets the place's variant index to the
/// given one, and makes no other changes. A `Downcast` projection must always be followed
/// immediately by a `Field` projection.
/// - [`Field`](ProjectionElem::Field): `Field` projections take their parent place and create a
/// place referring to one of the fields of the type. The resulting address is the parent
/// address, plus the offset of the field. The type becomes the type of the field. If the parent
/// was unsized and so had metadata associated with it, then the metadata is retained if the
/// field is unsized and thrown out if it is sized.
///
/// These projections are only legal for tuples, ADTs, closures, and coroutines. If the ADT or
/// coroutine has more than one variant, the parent place's variant index must be set, indicating
/// which variant is being used. If it has just one variant, the variant index may or may not be
/// included - the single possible variant is inferred if it is not included.
/// - [`OpaqueCast`](ProjectionElem::OpaqueCast): This projection changes the place's type to the
/// given one, and makes no other changes. An `OpaqueCast` projection on any type other than an
/// opaque type from the current crate is not well-formed.
/// - [`ConstantIndex`](ProjectionElem::ConstantIndex): Computes an offset in units of `T` into the
/// place as described in the documentation for the `ProjectionElem`. The resulting address is
/// the parent's address plus that offset, and the type is `T`. This is only legal if the parent
/// place has type `[T; N]` or `[T]` (*not* `&[T]`). Since such a `T` is always sized, any
/// resulting metadata is thrown out.
/// - [`Subslice`](ProjectionElem::Subslice): This projection calculates an offset and a new
/// address in a similar manner as `ConstantIndex`. It is also only legal on `[T; N]` and `[T]`.
/// However, this yields a `Place` of type `[T]`, and additionally sets the metadata to be the
/// length of the subslice.
/// - [`Index`](ProjectionElem::Index): Like `ConstantIndex`, only legal on `[T; N]` or `[T]`.
/// However, `Index` additionally takes a local from which the value of the index is computed at
/// runtime. Computing the value of the index involves interpreting the `Local` as a
/// `Place { local, projection: [] }`, and then computing its value as if done via
/// [`Operand::Copy`]. The array/slice is then indexed with the resulting value. The local must
/// have type `usize`.
/// - [`Deref`](ProjectionElem::Deref): Derefs are the last type of projection, and the most
/// complicated. They are only legal on parent places that are references, pointers, or `Box`. A
/// `Deref` projection begins by loading a value from the parent place, as if by
/// [`Operand::Copy`]. It then dereferences the resulting pointer, creating a place of the
/// pointee's type. The resulting address is the address that was stored in the pointer. If the
/// pointee type is unsized, the pointer additionally stores the value of the metadata.
///
/// The "validity invariant" of places is the same as that of raw pointers, meaning that e.g.
/// `*ptr` on a dangling or unaligned pointer is never UB. (Later doing a load/store on that place
/// or turning it into a reference can be UB though!) The only ways a place computation can
/// cause UB are:
/// - On a `Deref` projection, we do an actual load of the inner place, with all the usual
/// consequences (the inner place must be based on an aligned pointer, it must point to allocated
/// memory, the aliasing model must allow reads, this must not be a data race).
/// - For the projections that perform pointer arithmetic, the offset must be in-bounds of an
/// allocation (i.e., the preconditions of `ptr::offset` must be met).
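///
/// An illustrative sketch (not actual compiler output): the Rust expression `(*x.f)[i]`, with
/// `x` in `_1` and `i` in `_2`, might correspond to a place along the lines of
/// ```ignore (illustrative)
/// // pretty-printed roughly as `(*(_1.0))[_2]`
/// Place { local: _1, projection: [Field(0), Deref, Index(_2)] }
/// ```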
#[derive(Copy, Clone, PartialEq, Eq, Hash, TyEncodable, HashStable, TypeFoldable, TypeVisitable)]
pub struct Place<'tcx> {
pub local: Local,
/// projection out of a place (access a field, deref a pointer, etc)
pub projection: &'tcx List<PlaceElem<'tcx>>,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
#[derive(TyEncodable, TyDecodable, HashStable, TypeFoldable, TypeVisitable)]
pub enum ProjectionElem<V, T> {
Deref,
/// A field (e.g., `f` in `_1.f`) is one variant of [`ProjectionElem`]. Conceptually,
/// rustc can identify that a field projection refers to either two different regions of memory
/// or the same one between the base and the 'projection element'.
/// Read more about projections in the [rustc-dev-guide][mir-datatypes]
///
/// [mir-datatypes]: https://rustc-dev-guide.rust-lang.org/mir/index.html#mir-data-types
Field(FieldIdx, T),
/// Index into a slice/array.
///
/// Note that this does not also dereference, and so it does not exactly correspond to slice
/// indexing in Rust. In other words, in the below Rust code:
///
/// ```rust
/// let x = &[1, 2, 3, 4];
/// let i = 2;
/// x[i];
/// ```
///
/// The `x[i]` is turned into a `Deref` followed by an `Index`, not just an `Index`. The same
/// thing is true of the `ConstantIndex` and `Subslice` projections below.
Index(V),
/// These indices are generated by slice patterns. Easiest to explain
/// by example:
///
/// ```ignore (illustrative)
/// [X, _, .._, _, _] => { offset: 0, min_length: 4, from_end: false },
/// [_, X, .._, _, _] => { offset: 1, min_length: 4, from_end: false },
/// [_, _, .._, X, _] => { offset: 2, min_length: 4, from_end: true },
/// [_, _, .._, _, X] => { offset: 1, min_length: 4, from_end: true },
/// ```
ConstantIndex {
/// index or -index (in Python terms), depending on from_end
offset: u64,
/// The thing being indexed must be at least this long -- otherwise, the
/// projection is UB.
///
/// For arrays this is always the exact length.
min_length: u64,
/// Counting backwards from end? This is always false when indexing an
/// array.
from_end: bool,
},
/// These indices are generated by slice patterns.
///
/// If `from_end` is true `slice[from..slice.len() - to]`.
/// Otherwise `array[from..to]`.
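///
/// An illustrative sketch (assumed mapping) of slice patterns that would produce `Subslice`
/// projections when matching on a slice:
/// ```ignore (illustrative)
/// [_, _, rest @ ..]    // `rest` is Subslice { from: 2, to: 0, from_end: true }
/// [_, _, rest @ .., _] // `rest` is Subslice { from: 2, to: 1, from_end: true }
/// ```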
Subslice {
from: u64,
to: u64,
/// Whether `to` counts from the start or end of the array/slice.
/// For `PlaceElem`s this is `true` if and only if the base is a slice.
/// For `ProjectionKind`, this can also be `true` for arrays.
from_end: bool,
},
/// "Downcast" to a variant of an enum or a coroutine.
///
/// The included Symbol is the name of the variant, used for printing MIR.
///
/// This operation itself is never UB, all it does is change the type of the place.
Downcast(Option<Symbol>, VariantIdx),
/// Like an explicit cast from an opaque type to a concrete type, but without
/// requiring an intermediate variable.
///
/// This is unused with `-Znext-solver`.
OpaqueCast(T),
/// A transmute from an unsafe binder to the type that it wraps. This is a projection
/// of a place, so it doesn't necessarily constitute a move out of the binder.
UnwrapUnsafeBinder(T),
/// A `Subtype(T)` projection is applied to any `StatementKind::Assign` where the
/// type of the lvalue doesn't match the type of the rvalue; the primary goal is to make subtyping
/// explicit during optimizations and codegen.
///
/// This projection doesn't impact the runtime behavior of the program except for potentially changing
/// some type metadata of the interpreter or codegen backend.
///
/// This goal is achieved with the `mir_transform` pass `Subtyper`, which runs right after the
/// borrow checker, as we only care about subtyping that can affect trait selection and
/// `TypeId`.
Subtype(T),
}
/// Alias for projections as they appear in places, where the base is a place
/// and the index is a local.
pub type PlaceElem<'tcx> = ProjectionElem<Local, Ty<'tcx>>;
///////////////////////////////////////////////////////////////////////////
// Operands
/// An operand in MIR represents a "value" in Rust, the definition of which is undecided and part of
/// the memory model. One proposal for a definition of values can be found [on UCG][value-def].
///
/// [value-def]: https://github.com/rust-lang/unsafe-code-guidelines/blob/master/wip/value-domain.md
///
/// The most common way to create values is via loading a place. Loading a place is an operation
/// which reads the memory of the place and converts it to a value. This is a fundamentally *typed*
/// operation. The nature of the value produced depends on the type of the conversion. Furthermore,
/// there may be other effects: if the type has a validity constraint loading the place might be UB
/// if the validity constraint is not met.
///
/// **Needs clarification:** Is loading a place that has its variant index set well-formed? Miri
/// currently implements it, but it seems like this may be something to check against in the
/// validator.
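///
/// A hedged illustration of how the three operand kinds appear when MIR is pretty-printed
/// (the exact rendering varies between compiler versions):
///
/// ```ignore (illustrative)
/// _2 = copy _1;        // Operand::Copy(_1)
/// _3 = move _1;        // Operand::Move(_1)
/// _4 = const 42_i32;   // Operand::Constant(..)
/// ```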
#[derive(Clone, PartialEq, TyEncodable, TyDecodable, Hash, HashStable, TypeFoldable, TypeVisitable)]
pub enum Operand<'tcx> {
/// Creates a value by loading the given place.
///
/// Before drop elaboration, the type of the place must be `Copy`. After drop elaboration there
/// is no such requirement.
Copy(Place<'tcx>),
/// Creates a value by loading the given place, just like the `Copy` operand.
///
/// This *may* additionally overwrite the place with `uninit` bytes, depending on how we decide
/// in [UCG#188]. You should not emit MIR that may attempt a subsequent second load of this
/// place without first re-initializing it.
///
/// **Needs clarification:** The operational impact of `Move` is unclear. Currently (both in
/// Miri and codegen) it has no effect at all unless it appears in an argument to `Call`; for
/// `Call` it allows the argument to be passed to the callee "in-place", i.e. the callee might
/// just get a reference to this place instead of a full copy. Miri implements this with a
/// combination of aliasing model "protectors" and putting `uninit` into the place. Ralf
/// proposes that we don't want these semantics for `Move` in regular assignments, because
/// loading a place should not have side-effects, and the aliasing model "protectors" are
/// inherently tied to a function call. Are these the semantics we want for MIR? Is this
/// something we can even decide without knowing more about Rust's memory model?
///
/// [UCG#188]: https://github.com/rust-lang/unsafe-code-guidelines/issues/188
Move(Place<'tcx>),
/// Constants are already semantically values, and remain unchanged.
Constant(Box<ConstOperand<'tcx>>),
}
#[derive(Clone, Copy, PartialEq, TyEncodable, TyDecodable, Hash, HashStable)]
#[derive(TypeFoldable, TypeVisitable)]
pub struct ConstOperand<'tcx> {
pub span: Span,
/// Optional user-given type: for something like
/// `collect::<Vec<_>>`, this would be present and would
/// indicate that `Vec<_>` was explicitly specified.
///
/// Needed for NLL to impose user-given type constraints.
pub user_ty: Option<UserTypeAnnotationIndex>,
pub const_: Const<'tcx>,
}
///////////////////////////////////////////////////////////////////////////
// Rvalues
/// The various kinds of rvalues that can appear in MIR.
///
/// Not all of these are allowed at every [`MirPhase`] - when this is the case, it's stated below.
///
/// Computing any rvalue begins by evaluating the places and operands in some order (**Needs
/// clarification**: Which order?). These are then used to produce a "value" - the same kind of
/// value that an [`Operand`] produces.
#[derive(Clone, TyEncodable, TyDecodable, Hash, HashStable, PartialEq, TypeFoldable, TypeVisitable)]
pub enum Rvalue<'tcx> {
/// Yields the operand unchanged
Use(Operand<'tcx>),
/// Creates an array where each element is the value of the operand.
///
/// If the repetition count is zero, the operand's value is not dropped; this is the cause of a
/// known bug, see [#74836].
///
/// Corresponds to source code like `[x; 32]`.
///
/// [#74836]: https://github.com/rust-lang/rust/issues/74836
Repeat(Operand<'tcx>, ty::Const<'tcx>),
/// Creates a reference of the indicated kind to the place.
///
/// There is not much to document here, because besides the obvious parts the semantics of this
/// are essentially entirely a part of the aliasing model. There are many UCG issues discussing
/// exactly what the behavior of this operation should be.
///
/// `Shallow` borrows are disallowed after drop lowering.
Ref(Region<'tcx>, BorrowKind, Place<'tcx>),
/// Creates a pointer/reference to the given thread local.
///
/// The yielded type is `*mut T` if the static is mutable; otherwise, if the static is extern,
/// it is `*const T`; and if neither of those apply, it is `&T`.
///
/// **Note:** This is a runtime operation that actually executes code and is in this sense more
/// like a function call. Also, eliminating dead stores of this rvalue causes `fn main() {}` to
/// SIGILL for some reason that I (JakobDegen) never got a chance to look into.
///
/// **Needs clarification**: Are there weird additional semantics here related to the runtime
/// nature of this operation?
ThreadLocalRef(DefId),
/// Creates a raw pointer with the indicated mutability to the place.
///
/// This is generated by pointer casts like `&v as *const _` or raw borrow expressions like
/// `&raw const v`.
///
/// Like with references, the semantics of this operation are heavily dependent on the aliasing
/// model.
RawPtr(RawPtrKind, Place<'tcx>),
/// Yields the length of the place, as a `usize`.
///
/// If the type of the place is an array, this is the array length. For slices (`[T]`, not
/// `&[T]`) this accesses the place's metadata to determine the length. This rvalue is
/// ill-formed for places of other types.
///
/// This cannot be a `UnOp(PtrMetadata, _)` because that expects a value, and we only
/// have a place, and `UnOp(PtrMetadata, RawPtr(place))` is not a thing.
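///
/// A hedged sketch (illustrative): array/slice indexing typically emits a bounds check built
/// from this rvalue, along the lines of
///
/// ```ignore (illustrative)
/// _3 = Len((*_1));            // _1: &[T]; slice length from the place's metadata
/// _4 = Lt(copy _2, copy _3);  // index < len
/// assert(move _4, "index out of bounds", ...) -> bb1;
/// ```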
Len(Place<'tcx>),
/// Performs essentially all of the casts that can be performed via `as`.
///
/// This allows for casts from/to a variety of types.
///
/// **FIXME**: Document exactly which `CastKind`s allow which types of casts.
Cast(CastKind, Operand<'tcx>, Ty<'tcx>),
/// * `Offset` has the same semantics as [`offset`](pointer::offset), except that the second
/// parameter may be a `usize` as well.
/// * The comparison operations accept `bool`s, `char`s, signed or unsigned integers, floats,
/// raw pointers, or function pointers and return a `bool`. The types of the operands must be
/// matching, up to the usual caveat of the lifetimes in function pointers.
/// * Left and right shift operations accept signed or unsigned integers not necessarily of the
/// same type and return a value of the same type as their LHS. Like in Rust, the RHS is
/// truncated as needed (see the sketch after this list).
/// * The `Bit*` operations accept signed integers, unsigned integers, or bools with matching
/// types and return a value of that type.
/// * The `FooWithOverflow` operations are like the corresponding `Foo` operations, but return
/// `(T, bool)` instead of just `T`, where the `bool` is true if the result is not equal to
/// the infinite-precision result.
/// * The remaining operations accept signed integers, unsigned integers, or floats with
/// matching types and return a value of that type.
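///
/// A hedged sketch of the shift-truncation rule above (illustrative values only):
///
/// ```ignore (illustrative)
/// // With a 32-bit LHS, the effective shift amount is RHS.rem_euclid(32):
/// // Shl(1_i32, 33_u8)  shifts by 1,  yielding 2
/// // Shl(1_i32, -1_i8)  shifts by 31, yielding i32::MIN
/// ```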
BinaryOp(BinOp, Box<(Operand<'tcx>, Operand<'tcx>)>),
/// Computes a value as described by the operation.
NullaryOp(NullOp<'tcx>, Ty<'tcx>),
/// Exactly like `BinaryOp`, but with fewer operands.
///
/// Also does two's-complement arithmetic. Negation requires a signed integer or a float;
/// bitwise not requires a signed integer, unsigned integer, or bool. Both operation kinds
/// return a value with the same type as their operand.
UnaryOp(UnOp, Operand<'tcx>),
/// Computes the discriminant of the place, returning it as an integer of type
/// [`discriminant_ty`]. Returns zero for types without a discriminant.
///
/// The validity requirements for the underlying value are undecided for this rvalue, see
/// [#91095]. Note too that the value of the discriminant is not the same thing as the
/// variant index; use [`discriminant_for_variant`] to convert.
///
/// [`discriminant_ty`]: crate::ty::Ty::discriminant_ty
/// [#91095]: https://github.com/rust-lang/rust/issues/91095
/// [`discriminant_for_variant`]: crate::ty::Ty::discriminant_for_variant
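///
/// A hedged sketch of the discriminant-vs-variant-index distinction (illustrative only):
///
/// ```ignore (illustrative)
/// enum E { A = 10, B = 20 }
/// // For a place `_1: E`, `Discriminant(_1)` yields 10 or 20 (as the integer type
/// // given by `discriminant_ty`), while the variant *indices* of `A` and `B` are 0 and 1.
/// ```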
Discriminant(Place<'tcx>),
/// Creates an aggregate value, like a tuple or struct.
///
/// This is needed because dataflow analysis needs to distinguish
/// `dest = Foo { x: ..., y: ... }` from `dest.x = ...; dest.y = ...;` in the case that `Foo`
/// has a destructor.
///
/// Disallowed after deaggregation for all aggregate kinds except `Array` and `Coroutine`. After
/// coroutine lowering, `Coroutine` aggregate kinds are disallowed too.
Aggregate(Box<AggregateKind<'tcx>>, IndexVec<FieldIdx, Operand<'tcx>>),
/// Transmutes a `*mut u8` into shallow-initialized `Box<T>`.
///
/// This is different from a normal transmute because dataflow analysis will treat the box as
/// initialized but its content as uninitialized. Like other pointer casts, this in general
/// affects alias analysis.
ShallowInitBox(Operand<'tcx>, Ty<'tcx>),
/// A CopyForDeref is equivalent to a read from a place at the
/// codegen level, but is treated specially by drop elaboration. When such a read happens, it
/// is guaranteed (by the `Derefer` MIR pass in `rustc_mir_transform/src/deref_separator.rs`)
/// that the only use of the returned value is a deref operation, immediately
/// followed by one or more projections. Drop elaboration treats this rvalue as if the
/// read never happened and just projects further. This allows simplifying various MIR
/// optimizations and codegen backends that previously had to handle deref operations anywhere
/// in a place.
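///
/// A hedged sketch of what the `Derefer` pass produces (illustrative, not exact MIR output):
///
/// ```ignore (illustrative)
/// // Before: a deref in the middle of a projection chain
/// _2 = copy (*(_1.0)).1;
/// // After: the inner pointer is read out with CopyForDeref first,
/// // so the deref is always the first projection
/// _3 = deref_copy (_1.0);
/// _2 = copy (*_3).1;
/// ```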
CopyForDeref(Place<'tcx>),
/// Wraps a value in an unsafe binder.
WrapUnsafeBinder(Operand<'tcx>, Ty<'tcx>),
}
#[derive(Clone, Copy, Debug, PartialEq, Eq, TyEncodable, TyDecodable, Hash, HashStable)]
pub enum CastKind {
/// An exposing pointer to address cast. A cast between a pointer and an integer type, or
/// between a function pointer and an integer type.
/// See the docs on `expose_provenance` for more details.
PointerExposeProvenance,
/// An address-to-pointer cast that picks up an exposed provenance.
/// See the docs on `with_exposed_provenance` for more details.
PointerWithExposedProvenance,
/// Pointer related casts that are done by coercions. Note that reference-to-raw-ptr casts are
/// translated into `&raw mut/const *r`, i.e., they are not actually casts.
///
/// The following are allowed in [`AnalysisPhase::Initial`] as they're needed for borrowck,
/// but after that are forbidden (including in all phases of runtime MIR):
/// * [`PointerCoercion::ArrayToPointer`]
/// * [`PointerCoercion::MutToConstPointer`]
///
/// Both are runtime nops, so should be [`CastKind::PtrToPtr`] instead in runtime MIR.
PointerCoercion(PointerCoercion, CoercionSource),
IntToInt,
FloatToInt,
FloatToFloat,
IntToFloat,
PtrToPtr,
FnPtrToPtr,
/// Reinterpret the bits of the input as a different type.
///
/// MIR is well-formed even if the input and output types have different sizes, but actually
/// running a transmute between differently-sized types is UB.
Transmute,
}
/// Represents how a [`CastKind::PointerCoercion`] was constructed.
/// Used only for diagnostics.
#[derive(Clone, Copy, Debug, PartialEq, Eq, TyEncodable, TyDecodable, Hash, HashStable)]
pub enum CoercionSource {
/// The coercion was manually written by the user with an `as` cast.
AsCast,
/// The coercion was automatically inserted by the compiler.
Implicit,
}
#[derive(Clone, Debug, PartialEq, Eq, TyEncodable, TyDecodable, Hash, HashStable)]
#[derive(TypeFoldable, TypeVisitable)]
pub enum AggregateKind<'tcx> {
/// The type is that of the element, not of the whole array
Array(Ty<'tcx>),
Tuple,
/// The second field is the variant index. It's equal to 0 for struct
/// and union expressions. The last field is the
/// active field number and is present only for union expressions
/// -- e.g., for a union expression `SomeUnion { c: .. }`, the
/// active field index would identify the field `c`.
Adt(DefId, VariantIdx, GenericArgsRef<'tcx>, Option<UserTypeAnnotationIndex>, Option<FieldIdx>),
Closure(DefId, GenericArgsRef<'tcx>),
Coroutine(DefId, GenericArgsRef<'tcx>),
CoroutineClosure(DefId, GenericArgsRef<'tcx>),
/// Construct a raw pointer from the data pointer and metadata.
///
/// The `Ty` here is the type of the *pointee*, not the pointer itself.
/// The `Mutability` indicates whether this produces a `*const` or `*mut`.
///
/// The [`Rvalue::Aggregate`] operands for this must be:
///
/// 0. A raw pointer of matching mutability with any [`core::ptr::Thin`] pointee
/// 1. A value of the appropriate [`core::ptr::Pointee::Metadata`] type
///
/// *Both* operands must always be included, even the unit value if this is
/// creating a thin pointer. If you're just converting between thin pointers,
/// you may want an [`Rvalue::Cast`] with [`CastKind::PtrToPtr`] instead.
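///
/// A hedged sketch (illustrative; the exact pretty-printed form may differ):
///
/// ```ignore (illustrative)
/// // Building a wide `*const [u8]` from a thin data pointer and a length:
/// // _1: *const u8, _2: usize
/// _3 = *const [u8] from (copy _1, copy _2);
/// ```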
RawPtr(Ty<'tcx>, Mutability),
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, TyEncodable, TyDecodable, Hash, HashStable)]
pub enum NullOp<'tcx> {
/// Returns the size of a value of that type
SizeOf,
/// Returns the minimum alignment of a type
AlignOf,
/// Returns the offset of a field
OffsetOf(&'tcx List<(VariantIdx, FieldIdx)>),
/// Returns whether we should perform some UB-checking at runtime.
/// See the `ub_checks` intrinsic docs for details.
UbChecks,
/// Returns whether we should perform contract-checking at runtime.
/// See the `contract_checks` intrinsic docs for details.
ContractChecks,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
#[derive(HashStable, TyEncodable, TyDecodable, TypeFoldable, TypeVisitable)]
pub enum UnOp {
/// The `!` operator for logical inversion
Not,
/// The `-` operator for negation
Neg,
/// Gets the metadata `M` from a `*const`/`*mut`/`&`/`&mut` to
/// `impl Pointee<Metadata = M>`.
///
/// For example, this will give a `()` from `*const i32`, a `usize` from
/// `&mut [u8]`, or a `ptr::DynMetadata<dyn Foo>` (internally a pointer)
/// from a `*mut dyn Foo`.
///
/// Allowed only in [`MirPhase::Runtime`]; earlier it's an intrinsic.
PtrMetadata,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
#[derive(TyEncodable, TyDecodable, HashStable, TypeFoldable, TypeVisitable)]
pub enum BinOp {
/// The `+` operator (addition)
Add,
/// Like `Add`, but with UB on overflow. (Integers only.)
AddUnchecked,
/// Like `Add`, but returns `(T, bool)` of both the wrapped result
/// and a bool indicating whether it overflowed.
AddWithOverflow,
/// The `-` operator (subtraction)
Sub,
/// Like `Sub`, but with UB on overflow. (Integers only.)
SubUnchecked,
/// Like `Sub`, but returns `(T, bool)` of both the wrapped result
/// and a bool indicating whether it overflowed.
SubWithOverflow,
/// The `*` operator (multiplication)
Mul,
/// Like `Mul`, but with UB on overflow. (Integers only.)
MulUnchecked,
/// Like `Mul`, but returns `(T, bool)` of both the wrapped result
/// and a bool indicating whether it overflowed.
MulWithOverflow,
/// The `/` operator (division)
///
/// For integer types, division by zero is UB, as is `MIN / -1` for signed.
/// The compiler should have inserted checks prior to this.
///
/// Floating-point division by zero is safe, and does not need guards.
Div,
/// The `%` operator (modulus)
///
/// For integer types, using zero as the modulus (second operand) is UB,
/// as is `MIN % -1` for signed.
/// The compiler should have inserted checks prior to this.
///
/// Floating-point remainder by zero is safe, and does not need guards.
Rem,
/// The `^` operator (bitwise xor)
BitXor,
/// The `&` operator (bitwise and)
BitAnd,
/// The `|` operator (bitwise or)
BitOr,
/// The `<<` operator (shift left)
///
/// The offset is given by `RHS.rem_euclid(LHS::BITS)`.
/// In other words, it is (uniquely) determined as follows:
/// - it is "equal modulo LHS::BITS" to the RHS
/// - it is in the range `0..LHS::BITS`
Shl,
/// Like `Shl`, but is UB if the RHS >= LHS::BITS or RHS < 0
ShlUnchecked,
/// The `>>` operator (shift right)
///
/// The offset is given by `RHS.rem_euclid(LHS::BITS)`.
/// In other words, it is (uniquely) determined as follows:
/// - it is "equal modulo LHS::BITS" to the RHS
/// - it is in the range `0..LHS::BITS`
///
/// This is an arithmetic shift if the LHS is signed
/// and a logical shift if the LHS is unsigned.
Shr,
/// Like `Shr`, but is UB if the RHS >= LHS::BITS or RHS < 0
ShrUnchecked,
/// The `==` operator (equality)
Eq,
/// The `<` operator (less than)
Lt,
/// The `<=` operator (less than or equal to)
Le,
/// The `!=` operator (not equal to)
Ne,
/// The `>=` operator (greater than or equal to)
Ge,
/// The `>` operator (greater than)
Gt,
/// The `<=>` operator (three-way comparison, like `Ord::cmp`)
///
/// This is supported only on the integer types and `char`, always returning
/// [`rustc_hir::LangItem::OrderingEnum`] (aka [`std::cmp::Ordering`]).
///
/// [`Rvalue::BinaryOp`]`(BinOp::Cmp, A, B)` returns
/// - `Ordering::Less` (`-1_i8`, as a Scalar) if `A < B`
/// - `Ordering::Equal` (`0_i8`, as a Scalar) if `A == B`
/// - `Ordering::Greater` (`+1_i8`, as a Scalar) if `A > B`
Cmp,
/// The `ptr.offset` operator
Offset,
}
// Assignment operators, e.g. `+=`. See comments on the corresponding variants
// in `BinOp` for details.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, HashStable)]
pub enum AssignOp {
AddAssign,
SubAssign,
MulAssign,
DivAssign,
RemAssign,
BitXorAssign,
BitAndAssign,
BitOrAssign,
ShlAssign,
ShrAssign,
}
// Sometimes `BinOp` and `AssignOp` need the same treatment. The operations
// covered by `AssignOp` are a subset of those covered by `BinOp`, so it makes
// sense to convert `AssignOp` to `BinOp`.
impl From<AssignOp> for BinOp {
fn from(op: AssignOp) -> BinOp {
match op {
AssignOp::AddAssign => BinOp::Add,
AssignOp::SubAssign => BinOp::Sub,
AssignOp::MulAssign => BinOp::Mul,
AssignOp::DivAssign => BinOp::Div,
AssignOp::RemAssign => BinOp::Rem,
AssignOp::BitXorAssign => BinOp::BitXor,
AssignOp::BitAndAssign => BinOp::BitAnd,
AssignOp::BitOrAssign => BinOp::BitOr,
AssignOp::ShlAssign => BinOp::Shl,
AssignOp::ShrAssign => BinOp::Shr,
}
}
}
// Some nodes are used a lot. Make sure they don't unintentionally get bigger.
#[cfg(target_pointer_width = "64")]
mod size_asserts {
use rustc_data_structures::static_assert_size;
use super::*;
// tidy-alphabetical-start
static_assert_size!(AggregateKind<'_>, 32);
static_assert_size!(Operand<'_>, 24);
static_assert_size!(Place<'_>, 16);
static_assert_size!(PlaceElem<'_>, 24);
static_assert_size!(Rvalue<'_>, 40);
static_assert_size!(StatementKind<'_>, 16);
static_assert_size!(TerminatorKind<'_>, 80);
// tidy-alphabetical-end
}