Commit aef3e534f5

Andrew Kelley <andrew@ziglang.org>
2021-03-16 07:38:38
stage2: *WIP*: rework ZIR memory layout; overhaul source locations
The memory layout for ZIR instructions is completely reworked. See zir.zig for those changes. Some new types:

* `zir.Code`: a "finished" set of ZIR instructions. Instead of allocating each instruction independently, there is now a Tag and 8 bytes of data available for all ZIR instructions. Small instructions fit within these 8 bytes; larger ones use 4 bytes for an index into `extra`. There is also `string_bytes` so that we can have 4 byte references to strings. `zir.Inst.Tag` describes how to interpret those 8 bytes of data.
  - This is shared by all `Block` scopes.
* `Module.WipZirCode`: represents an in-progress `zir.Code`. In this structure, the arrays are mutable and get resized as we add/delete things. There is extra state to keep track of things. This struct is stored on the stack. Once it is finished, it produces an immutable `zir.Code`, which remains on the heap for the duration of a function's existence.
  - This is shared by all `GenZir` scopes.
* `Sema`: represents in-progress semantic analysis of a `zir.Code`. This data is stored on the stack and is shared among all `Block` scopes. It is now the main "self" argument to everything in the file that was previously named `zir_sema.zig`. Additionally, I moved some logic that was in `Module` into here.

`Module.Fn` now stores its parameter names inside the `zir.Code` instead of inside ZIR instructions. When the time comes to rework the TZIR memory layout, codegen will be able to reference this data directly instead of duplicating it.

astgen.zig is (so far) almost entirely untouched, but nearly all of it will need to be reworked to adhere to this new memory layout structure.

I have no benchmarks to report yet, as I am still working through compile errors and fixing various things that I broke in this branch.

Overhaul of Source Locations:

Previously we used `usize` everywhere to mean byte offset, but it sometimes also meant other things.
This was error prone, and it also made us do unnecessary work and store unnecessary bytes in memory. Now there are more types involved in source locations, and more ways to describe a source location.

* AllErrors.Message: embrace the assumption that files always have less than 2 << 32 bytes.
* SrcLoc gets more complicated, to model more complicated source locations.
* Introduce LazySrcLoc, which can model interesting source locations with very little stored state. Useful for avoiding unnecessary work when no compile errors occur.

Also, previously we had `src: usize` on every ZIR instruction. This is no longer the case. Each instruction now determines whether it even cares about source location, and if so, how that source location is stored. This requires more careful work inside `Sema`, but it results in fewer bytes stored on the heap, without compromising the accuracy and power of compile error messages.

Miscellaneous:

* std.zig: string literals have more helpful result values for reporting errors. There is now a lower level API and a higher level API.
  - Side note: I noticed that the string literal logic needs some love. There is some unnecessarily hacky code there.
* Cut & pasted some TZIR logic that was in zir.zig to ir.zig. This probably broke stuff and needs to get fixed.
* Removed type/Enum.zig, type/Union.zig, and type/Struct.zig. I don't think this is quite how this code will be organized; some more careful planning is needed about how to implement structs, unions, and enums. They need to be independent Decls, just like a top level function.
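
The dense encoding described above (a small tag plus 8 bytes of data per instruction, with overflow into `extra` and `string_bytes`) can be sketched roughly as follows. This is an illustrative struct-of-arrays sketch, not the actual `zir.Code` definition from this commit; all tag and field names here are made up for the example:

```zig
// Illustrative sketch only: each instruction is one small tag plus 8
// bytes of data; payloads that don't fit spill into `extra` or
// `string_bytes` and are referenced by 4-byte indexes.
const Code = struct {
    tags: []Tag,
    data: []Data,
    /// Trailing payload data, referenced by 4-byte indexes stored in `Data`.
    extra: []u32,
    /// String table, referenced by (offset, len) pairs instead of slices.
    string_bytes: []u8,

    const Tag = enum(u8) {
        /// `data` holds two operand indexes directly (fits in 8 bytes).
        add,
        /// `data` holds an (offset, len) pair into `string_bytes`.
        enum_literal,
        /// `data` holds a 4-byte index into `extra`, where the larger
        /// payload (e.g. an argument list) lives.
        call,
    };

    /// 8 bytes of data; the corresponding `Tag` determines which
    /// interpretation is active.
    const Data = extern union {
        bin: extern struct { lhs: u32, rhs: u32 },
        str: extern struct { offset: u32, len: u32 },
        extra_index: u32,
    };
};
```

The point of the layout is that small instructions cost exactly tag + 8 bytes, while something like a call with many arguments stores only an index into `extra`, so no per-instruction heap allocation is needed.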
1 parent f16f250
lib/std/zig/string_literal.zig
@@ -6,112 +6,143 @@
 const std = @import("../std.zig");
 const assert = std.debug.assert;
 
-const State = enum {
-    Start,
-    Backslash,
-};
-
 pub const ParseError = error{
     OutOfMemory,
+    InvalidStringLiteral,
+};
 
-    /// When this is returned, index will be the position of the character.
-    InvalidCharacter,
+pub const Result = union(enum) {
+    success,
+    /// Found an invalid character at this index.
+    invalid_character: usize,
+    /// Expected hex digits at this index.
+    expected_hex_digits: usize,
+    /// Invalid hex digits at this index.
+    invalid_hex_escape: usize,
+    /// Invalid unicode escape at this index.
+    invalid_unicode_escape: usize,
+    /// The left brace at this index is missing a matching right brace.
+    missing_matching_brace: usize,
+    /// Expected unicode digits at this index.
+    expected_unicode_digits: usize,
 };
 
-/// caller owns returned memory
-pub fn parse(
-    allocator: *std.mem.Allocator,
-    bytes: []const u8,
-    bad_index: *usize, // populated if error.InvalidCharacter is returned
-) ParseError![]u8 {
+/// Parses `bytes` as a Zig string literal and appends the result to `buf`.
+/// Asserts `bytes` has '"' at beginning and end.
+pub fn parseAppend(buf: *std.ArrayList(u8), bytes: []const u8) error{OutOfMemory}!Result {
     assert(bytes.len >= 2 and bytes[0] == '"' and bytes[bytes.len - 1] == '"');
+    const slice = bytes[1..];
 
-    var list = std.ArrayList(u8).init(allocator);
-    errdefer list.deinit();
+    const prev_len = buf.items.len;
+    try buf.ensureCapacity(prev_len + slice.len - 1);
+    errdefer buf.shrinkRetainingCapacity(prev_len);
 
-    const slice = bytes[1..];
-    try list.ensureCapacity(slice.len - 1);
+    const State = enum {
+        Start,
+        Backslash,
+    };
 
     var state = State.Start;
     var index: usize = 0;
-    while (index < slice.len) : (index += 1) {
+    while (true) : (index += 1) {
         const b = slice[index];
 
         switch (state) {
             State.Start => switch (b) {
                 '\\' => state = State.Backslash,
                 '\n' => {
-                    bad_index.* = index;
-                    return error.InvalidCharacter;
+                    return Result{ .invalid_character = index };
                 },
-                '"' => return list.toOwnedSlice(),
-                else => try list.append(b),
+                '"' => return Result.success,
+                else => try buf.append(b),
             },
             State.Backslash => switch (b) {
                 'n' => {
-                    try list.append('\n');
+                    try buf.append('\n');
                     state = State.Start;
                 },
                 'r' => {
-                    try list.append('\r');
+                    try buf.append('\r');
                     state = State.Start;
                 },
                 '\\' => {
-                    try list.append('\\');
+                    try buf.append('\\');
                     state = State.Start;
                 },
                 't' => {
-                    try list.append('\t');
+                    try buf.append('\t');
                     state = State.Start;
                 },
                 '\'' => {
-                    try list.append('\'');
+                    try buf.append('\'');
                     state = State.Start;
                 },
                 '"' => {
-                    try list.append('"');
+                    try buf.append('"');
                     state = State.Start;
                 },
                 'x' => {
                     // TODO: add more/better/broader tests for this.
                     const index_continue = index + 3;
-                    if (slice.len >= index_continue)
-                        if (std.fmt.parseUnsigned(u8, slice[index + 1 .. index_continue], 16)) |char| {
-                            try list.append(char);
-                            state = State.Start;
-                            index = index_continue - 1; // loop-header increments again
-                            continue;
-                        } else |_| {};
-
-                    bad_index.* = index;
-                    return error.InvalidCharacter;
+                    if (slice.len < index_continue) {
+                        return Result{ .expected_hex_digits = index };
+                    }
+                    if (std.fmt.parseUnsigned(u8, slice[index + 1 .. index_continue], 16)) |byte| {
+                        try buf.append(byte);
+                        state = State.Start;
+                        index = index_continue - 1; // loop-header increments again
+                    } else |err| switch (err) {
+                        error.Overflow => unreachable, // 2 digits base 16 fits in a u8.
+                        error.InvalidCharacter => {
+                            return Result{ .invalid_hex_escape = index + 1 };
+                        },
+                    }
                 },
                 'u' => {
                     // TODO: add more/better/broader tests for this.
-                    if (slice.len > index + 2 and slice[index + 1] == '{')
+                    // TODO: we are already inside a nice, clean state machine... use it
+                    // instead of this hacky code.
+                    if (slice.len > index + 2 and slice[index + 1] == '{') {
                         if (std.mem.indexOfScalarPos(u8, slice[0..std.math.min(index + 9, slice.len)], index + 3, '}')) |index_end| {
                             const hex_str = slice[index + 2 .. index_end];
                             if (std.fmt.parseUnsigned(u32, hex_str, 16)) |uint| {
                                 if (uint <= 0x10ffff) {
-                                    try list.appendSlice(std.mem.toBytes(uint)[0..]);
+                                    try buf.appendSlice(std.mem.toBytes(uint)[0..]);
                                     state = State.Start;
                                     index = index_end; // loop-header increments
                                     continue;
                                 }
-                            } else |_| {}
-                        };
-
-                    bad_index.* = index;
-                    return error.InvalidCharacter;
+                            } else |err| switch (err) {
+                                error.Overflow => unreachable,
+                                error.InvalidCharacter => {
+                                    return Result{ .invalid_unicode_escape = index + 1 };
+                                },
+                            }
+                        } else {
+                            return Result{ .missing_matching_brace = index + 1 };
+                        }
+                    } else {
+                        return Result{ .expected_unicode_digits = index };
+                    }
                 },
                 else => {
-                    bad_index.* = index;
-                    return error.InvalidCharacter;
+                    return Result{ .invalid_character = index };
                 },
             },
         }
+    } else unreachable; // TODO should not need else unreachable on while(true)
+}
+
+/// Higher level API. Does not return extra info about parse errors.
+/// Caller owns returned memory.
+pub fn parseAlloc(allocator: *std.mem.Allocator, bytes: []const u8) ParseError![]u8 {
+    var buf = std.ArrayList(u8).init(allocator);
+    defer buf.deinit();
+
+    switch (try parseAppend(&buf, bytes)) {
+        .success => return buf.toOwnedSlice(),
+        else => return error.InvalidStringLiteral,
     }
-    unreachable;
 }
 
 test "parse" {
@@ -121,9 +152,8 @@ test "parse" {
     var fixed_buf_mem: [32]u8 = undefined;
     var fixed_buf_alloc = std.heap.FixedBufferAllocator.init(fixed_buf_mem[0..]);
     var alloc = &fixed_buf_alloc.allocator;
-    var bad_index: usize = undefined;
 
-    expect(eql(u8, "foo", try parse(alloc, "\"foo\"", &bad_index)));
-    expect(eql(u8, "foo", try parse(alloc, "\"f\x6f\x6f\"", &bad_index)));
-    expect(eql(u8, "f💯", try parse(alloc, "\"f\u{1f4af}\"", &bad_index)));
+    expect(eql(u8, "foo", try parseAlloc(alloc, "\"foo\"")));
+    expect(eql(u8, "foo", try parseAlloc(alloc, "\"f\x6f\x6f\"")));
+    expect(eql(u8, "f💯", try parseAlloc(alloc, "\"f\u{1f4af}\"")));
 }
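
The two-level API introduced above can be exercised roughly like this. This is a sketch based on the functions in the diff; the allocator setup is illustrative, and it uses the `*std.mem.Allocator` convention of this era of the standard library:

```zig
const std = @import("std");
// After this commit, the module is exposed as std.zig.string_literal.
const string_literal = std.zig.string_literal;

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = &gpa.allocator;

    // Higher level API: get the parsed bytes, or a single generic error.
    const parsed = try string_literal.parseAlloc(allocator, "\"foo\\n\"");
    defer allocator.free(parsed);

    // Lower level API: append into a caller-owned buffer and inspect the
    // Result union for detailed diagnostics.
    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();
    switch (try string_literal.parseAppend(&buf, "\"bad \\q escape\"")) {
        .success => {},
        .invalid_character => |i| std.debug.print("invalid char at {}\n", .{i}),
        else => std.debug.print("other parse error\n", .{}),
    }
}
```

Callers that only need the bytes use `parseAlloc`; compile-error reporting code uses `parseAppend` so the byte index inside the literal can be turned into a precise source location.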
lib/std/zig.zig
@@ -11,7 +11,7 @@ pub const Tokenizer = tokenizer.Tokenizer;
 pub const fmtId = @import("zig/fmt.zig").fmtId;
 pub const fmtEscapes = @import("zig/fmt.zig").fmtEscapes;
 pub const parse = @import("zig/parse.zig").parse;
-pub const parseStringLiteral = @import("zig/string_literal.zig").parse;
+pub const string_literal = @import("zig/string_literal.zig");
 pub const ast = @import("zig/ast.zig");
 pub const system = @import("zig/system.zig");
 pub const CrossTarget = @import("zig/cross_target.zig").CrossTarget;
src/type/Enum.zig
@@ -1,55 +0,0 @@
-const std = @import("std");
-const zir = @import("../zir.zig");
-const Value = @import("../value.zig").Value;
-const Type = @import("../type.zig").Type;
-const Module = @import("../Module.zig");
-const Scope = Module.Scope;
-const Enum = @This();
-
-base: Type.Payload = .{ .tag = .@"enum" },
-
-analysis: union(enum) {
-    queued: Zir,
-    in_progress,
-    resolved: Size,
-    failed,
-},
-scope: Scope.Container,
-
-pub const Field = struct {
-    value: Value,
-};
-
-pub const Zir = struct {
-    body: zir.Body,
-    inst: *zir.Inst,
-};
-
-pub const Size = struct {
-    tag_type: Type,
-    fields: std.StringArrayHashMapUnmanaged(Field),
-};
-
-pub fn resolve(self: *Enum, mod: *Module, scope: *Scope) !void {
-    const zir = switch (self.analysis) {
-        .failed => return error.AnalysisFail,
-        .resolved => return,
-        .in_progress => {
-            return mod.fail(scope, src, "enum '{}' depends on itself", .{enum_name});
-        },
-        .queued => |zir| zir,
-    };
-    self.analysis = .in_progress;
-
-    // TODO
-}
-
-// TODO should this resolve the type or assert that it has already been resolved?
-pub fn abiAlignment(self: *Enum, target: std.Target) u32 {
-    switch (self.analysis) {
-        .queued => unreachable, // alignment has not been resolved
-        .in_progress => unreachable, // alignment has not been resolved
-        .failed => unreachable, // type resolution failed
-        .resolved => |r| return r.tag_type.abiAlignment(target),
-    }
-}
src/type/Struct.zig
@@ -1,56 +0,0 @@
-const std = @import("std");
-const zir = @import("../zir.zig");
-const Value = @import("../value.zig").Value;
-const Type = @import("../type.zig").Type;
-const Module = @import("../Module.zig");
-const Scope = Module.Scope;
-const Struct = @This();
-
-base: Type.Payload = .{ .tag = .@"struct" },
-
-analysis: union(enum) {
-    queued: Zir,
-    zero_bits_in_progress,
-    zero_bits: Zero,
-    in_progress,
-    // alignment: Align,
-    resolved: Size,
-    failed,
-},
-scope: Scope.Container,
-
-pub const Field = struct {
-    value: Value,
-};
-
-pub const Zir = struct {
-    body: zir.Body,
-    inst: *zir.Inst,
-};
-
-pub const Zero = struct {
-    is_zero_bits: bool,
-    fields: std.StringArrayHashMapUnmanaged(Field),
-};
-
-pub const Size = struct {
-    is_zero_bits: bool,
-    alignment: u32,
-    size: u32,
-    fields: std.StringArrayHashMapUnmanaged(Field),
-};
-
-pub fn resolveZeroBits(self: *Struct, mod: *Module, scope: *Scope) !void {
-    const zir = switch (self.analysis) {
-        .failed => return error.AnalysisFail,
-        .zero_bits_in_progress => {
-            return mod.fail(scope, src, "struct '{}' depends on itself", .{});
-        },
-        .queued => |zir| zir,
-        else => return,
-    };
-
-    self.analysis = .zero_bits_in_progress;
-
-    // TODO
-}
src/type/Union.zig
@@ -1,56 +0,0 @@
-const std = @import("std");
-const zir = @import("../zir.zig");
-const Value = @import("../value.zig").Value;
-const Type = @import("../type.zig").Type;
-const Module = @import("../Module.zig");
-const Scope = Module.Scope;
-const Union = @This();
-
-base: Type.Payload = .{ .tag = .@"struct" },
-
-analysis: union(enum) {
-    queued: Zir,
-    zero_bits_in_progress,
-    zero_bits: Zero,
-    in_progress,
-    // alignment: Align,
-    resolved: Size,
-    failed,
-},
-scope: Scope.Container,
-
-pub const Field = struct {
-    value: Value,
-};
-
-pub const Zir = struct {
-    body: zir.Body,
-    inst: *zir.Inst,
-};
-
-pub const Zero = struct {
-    is_zero_bits: bool,
-    fields: std.StringArrayHashMapUnmanaged(Field),
-};
-
-pub const Size = struct {
-    is_zero_bits: bool,
-    alignment: u32,
-    size: u32,
-    fields: std.StringArrayHashMapUnmanaged(Field),
-};
-
-pub fn resolveZeroBits(self: *Union, mod: *Module, scope: *Scope) !void {
-    const zir = switch (self.analysis) {
-        .failed => return error.AnalysisFail,
-        .zero_bits_in_progress => {
-            return mod.fail(scope, src, "union '{}' depends on itself", .{});
-        },
-        .queued => |zir| zir,
-        else => return,
-    };
-
-    self.analysis = .zero_bits_in_progress;
-
-    // TODO
-}
src/astgen.zig
@@ -25,21 +25,22 @@ pub const ResultLoc = union(enum) {
     /// of an assignment uses this kind of result location.
     ref,
     /// The expression will be coerced into this type, but it will be evaluated as an rvalue.
-    ty: *zir.Inst,
+    ty: zir.Inst.Index,
     /// The expression must store its result into this typed pointer. The result instruction
     /// from the expression must be ignored.
-    ptr: *zir.Inst,
+    ptr: zir.Inst.Index,
     /// The expression must store its result into this allocation, which has an inferred type.
     /// The result instruction from the expression must be ignored.
-    inferred_ptr: *zir.Inst.Tag.alloc_inferred.Type(),
+    /// Always an instruction with tag `alloc_inferred`.
+    inferred_ptr: zir.Inst.Index,
     /// The expression must store its result into this pointer, which is a typed pointer that
     /// has been bitcasted to whatever the expression's type is.
     /// The result instruction from the expression must be ignored.
-    bitcasted_ptr: *zir.Inst.UnOp,
+    bitcasted_ptr: zir.Inst.Index,
     /// There is a pointer for the expression to store its result into, however, its type
     /// is inferred based on peer type resolution for a `zir.Inst.Block`.
     /// The result instruction from the expression must be ignored.
-    block_ptr: *Module.Scope.GenZIR,
+    block_ptr: *Module.Scope.GenZir,
 
     pub const Strategy = struct {
         elide_store_to_block_ptr_instructions: bool,
@@ -369,10 +370,10 @@ pub fn expr(mod: *Module, scope: *Scope, rl: ResultLoc, node: ast.Node.Index) In
 
         .call_one, .call_one_comma, .async_call_one, .async_call_one_comma => {
             var params: [1]ast.Node.Index = undefined;
-            return callExpr(mod, scope, rl, tree.callOne(&params, node));
+            return callExpr(mod, scope, rl, node, tree.callOne(&params, node));
         },
         .call, .call_comma, .async_call, .async_call_comma => {
-            return callExpr(mod, scope, rl, tree.callFull(node));
+            return callExpr(mod, scope, rl, node, tree.callFull(node));
         },
 
         .unreachable_literal => {
@@ -487,9 +488,12 @@ pub fn expr(mod: *Module, scope: *Scope, rl: ResultLoc, node: ast.Node.Index) In
         },
         .enum_literal => {
             const ident_token = main_tokens[node];
-            const name = try mod.identifierTokenString(scope, ident_token);
-            const src = token_starts[ident_token];
-            const result = try addZIRInst(mod, scope, src, zir.Inst.EnumLiteral, .{ .name = name }, .{});
+            const gen_zir = scope.getGenZir();
+            const string_bytes = &gen_zir.zir_exec.string_bytes;
+            const str_index = string_bytes.items.len;
+            try mod.appendIdentStr(scope, ident_token, string_bytes);
+            const str_len = string_bytes.items.len - str_index;
+            const result = try gen_zir.addStr(.enum_literal, str_index, str_len);
             return rvalue(mod, scope, rl, result);
         },
         .error_value => {
@@ -679,7 +683,7 @@ pub fn comptimeExpr(
     const token_starts = tree.tokens.items(.start);
 
     // Make a scope to collect generated instructions in the sub-expression.
-    var block_scope: Scope.GenZIR = .{
+    var block_scope: Scope.GenZir = .{
         .parent = parent_scope,
         .decl = parent_scope.ownerDecl().?,
         .arena = parent_scope.arena(),
@@ -720,7 +724,7 @@ fn breakExpr(
     while (true) {
         switch (scope.tag) {
             .gen_zir => {
-                const gen_zir = scope.cast(Scope.GenZIR).?;
+                const gen_zir = scope.cast(Scope.GenZir).?;
 
                 const block_inst = blk: {
                     if (break_label != 0) {
@@ -755,7 +759,7 @@ fn breakExpr(
                     try gen_zir.labeled_breaks.append(mod.gpa, br.castTag(.@"break").?);
 
                     if (have_store_to_block) {
-                        const inst_list = parent_scope.getGenZIR().instructions.items;
+                        const inst_list = parent_scope.getGenZir().instructions.items;
                         const last_inst = inst_list[inst_list.len - 2];
                         const store_inst = last_inst.castTag(.store_to_block_ptr).?;
                         assert(store_inst.positionals.lhs == gen_zir.rl_ptr.?);
@@ -797,7 +801,7 @@ fn continueExpr(
     while (true) {
         switch (scope.tag) {
             .gen_zir => {
-                const gen_zir = scope.cast(Scope.GenZIR).?;
+                const gen_zir = scope.cast(Scope.GenZir).?;
                 const continue_block = gen_zir.continue_block orelse {
                     scope = gen_zir.parent;
                     continue;
@@ -864,7 +868,7 @@ fn checkLabelRedefinition(mod: *Module, parent_scope: *Scope, label: ast.TokenIn
     while (true) {
         switch (scope.tag) {
             .gen_zir => {
-                const gen_zir = scope.cast(Scope.GenZIR).?;
+                const gen_zir = scope.cast(Scope.GenZir).?;
                 if (gen_zir.label) |prev_label| {
                     if (try tokenIdentEql(mod, parent_scope, label, prev_label.token)) {
                         const tree = parent_scope.tree();
@@ -931,9 +935,9 @@ fn labeledBlockExpr(
 
     try checkLabelRedefinition(mod, parent_scope, label_token);
 
-    // Create the Block ZIR instruction so that we can put it into the GenZIR struct
+    // Create the Block ZIR instruction so that we can put it into the GenZir struct
     // so that break statements can reference it.
-    const gen_zir = parent_scope.getGenZIR();
+    const gen_zir = parent_scope.getGenZir();
     const block_inst = try gen_zir.arena.create(zir.Inst.Block);
     block_inst.* = .{
         .base = .{
@@ -946,14 +950,14 @@ fn labeledBlockExpr(
         .kw_args = .{},
     };
 
-    var block_scope: Scope.GenZIR = .{
+    var block_scope: Scope.GenZir = .{
         .parent = parent_scope,
         .decl = parent_scope.ownerDecl().?,
         .arena = gen_zir.arena,
         .force_comptime = parent_scope.isComptime(),
         .instructions = .{},
         // TODO @as here is working around a stage1 miscompilation bug :(
-        .label = @as(?Scope.GenZIR.Label, Scope.GenZIR.Label{
+        .label = @as(?Scope.GenZir.Label, Scope.GenZir.Label{
             .token = label_token,
             .block_inst = block_inst,
         }),
@@ -1107,8 +1111,8 @@ fn varDecl(
                 }
                 s = local_ptr.parent;
             },
-            .gen_zir => s = s.cast(Scope.GenZIR).?.parent,
-            .gen_suspend => s = s.cast(Scope.GenZIR).?.parent,
+            .gen_zir => s = s.cast(Scope.GenZir).?.parent,
+            .gen_suspend => s = s.cast(Scope.GenZir).?.parent,
             .gen_nosuspend => s = s.cast(Scope.Nosuspend).?.parent,
             else => break,
         };
@@ -1137,7 +1141,7 @@ fn varDecl(
                 const sub_scope = try block_arena.create(Scope.LocalVal);
                 sub_scope.* = .{
                     .parent = scope,
-                    .gen_zir = scope.getGenZIR(),
+                    .gen_zir = scope.getGenZir(),
                     .name = ident_name,
                     .inst = init_inst,
                 };
@@ -1146,7 +1150,7 @@ fn varDecl(
 
             // Detect whether the initialization expression actually uses the
             // result location pointer.
-            var init_scope: Scope.GenZIR = .{
+            var init_scope: Scope.GenZir = .{
                 .parent = scope,
                 .decl = scope.ownerDecl().?,
                 .arena = scope.arena(),
@@ -1168,7 +1172,7 @@ fn varDecl(
             }
             const init_result_loc: ResultLoc = .{ .block_ptr = &init_scope };
             const init_inst = try expr(mod, &init_scope.base, init_result_loc, var_decl.ast.init_node);
-            const parent_zir = &scope.getGenZIR().instructions;
+            const parent_zir = &scope.getGenZir().instructions;
             if (init_scope.rvalue_rl_count == 1) {
                 // Result location pointer not used. We don't need an alloc for this
                 // const local, and type inference becomes trivial.
@@ -1192,7 +1196,7 @@ fn varDecl(
                 const sub_scope = try block_arena.create(Scope.LocalVal);
                 sub_scope.* = .{
                     .parent = scope,
-                    .gen_zir = scope.getGenZIR(),
+                    .gen_zir = scope.getGenZir(),
                     .name = ident_name,
                     .inst = casted_init,
                 };
@@ -1219,7 +1223,7 @@ fn varDecl(
             const sub_scope = try block_arena.create(Scope.LocalPtr);
             sub_scope.* = .{
                 .parent = scope,
-                .gen_zir = scope.getGenZIR(),
+                .gen_zir = scope.getGenZir(),
                 .name = ident_name,
                 .ptr = init_scope.rl_ptr.?,
             };
@@ -1246,7 +1250,7 @@ fn varDecl(
             const sub_scope = try block_arena.create(Scope.LocalPtr);
             sub_scope.* = .{
                 .parent = scope,
-                .gen_zir = scope.getGenZIR(),
+                .gen_zir = scope.getGenZir(),
                 .name = ident_name,
                 .ptr = var_data.alloc,
             };
@@ -1446,203 +1450,13 @@ fn arrayTypeSentinel(mod: *Module, scope: *Scope, rl: ResultLoc, node: ast.Node.
     return rvalue(mod, scope, rl, result);
 }
 
-fn containerField(
-    mod: *Module,
-    scope: *Scope,
-    field: ast.full.ContainerField,
-) InnerError!*zir.Inst {
-    const tree = scope.tree();
-    const token_starts = tree.tokens.items(.start);
-
-    const src = token_starts[field.ast.name_token];
-    const name = try mod.identifierTokenString(scope, field.ast.name_token);
-
-    if (field.comptime_token == null and field.ast.value_expr == 0 and field.ast.align_expr == 0) {
-        if (field.ast.type_expr != 0) {
-            const ty = try typeExpr(mod, scope, field.ast.type_expr);
-            return addZIRInst(mod, scope, src, zir.Inst.ContainerFieldTyped, .{
-                .bytes = name,
-                .ty = ty,
-            }, .{});
-        } else {
-            return addZIRInst(mod, scope, src, zir.Inst.ContainerFieldNamed, .{
-                .bytes = name,
-            }, .{});
-        }
-    }
-
-    const ty = if (field.ast.type_expr != 0) try typeExpr(mod, scope, field.ast.type_expr) else null;
-    // TODO result location should be alignment type
-    const alignment = if (field.ast.align_expr != 0) try expr(mod, scope, .none, field.ast.align_expr) else null;
-    // TODO result location should be the field type
-    const init = if (field.ast.value_expr != 0) try expr(mod, scope, .none, field.ast.value_expr) else null;
-
-    return addZIRInst(mod, scope, src, zir.Inst.ContainerField, .{
-        .bytes = name,
-    }, .{
-        .ty = ty,
-        .init = init,
-        .alignment = alignment,
-        .is_comptime = field.comptime_token != null,
-    });
-}
-
 fn containerDecl(
     mod: *Module,
     scope: *Scope,
     rl: ResultLoc,
     container_decl: ast.full.ContainerDecl,
 ) InnerError!*zir.Inst {
-    const tree = scope.tree();
-    const token_starts = tree.tokens.items(.start);
-    const node_tags = tree.nodes.items(.tag);
-    const token_tags = tree.tokens.items(.tag);
-
-    const src = token_starts[container_decl.ast.main_token];
-
-    var gen_scope: Scope.GenZIR = .{
-        .parent = scope,
-        .decl = scope.ownerDecl().?,
-        .arena = scope.arena(),
-        .force_comptime = scope.isComptime(),
-        .instructions = .{},
-    };
-    defer gen_scope.instructions.deinit(mod.gpa);
-
-    var fields = std.ArrayList(*zir.Inst).init(mod.gpa);
-    defer fields.deinit();
-
-    for (container_decl.ast.members) |member| {
-        // TODO just handle these cases differently since they end up with different ZIR
-        // instructions anyway. It will be simpler & have fewer branches.
-        const field = switch (node_tags[member]) {
-            .container_field_init => try containerField(mod, &gen_scope.base, tree.containerFieldInit(member)),
-            .container_field_align => try containerField(mod, &gen_scope.base, tree.containerFieldAlign(member)),
-            .container_field => try containerField(mod, &gen_scope.base, tree.containerField(member)),
-            else => continue,
-        };
-        try fields.append(field);
-    }
-
-    var decl_arena = std.heap.ArenaAllocator.init(mod.gpa);
-    errdefer decl_arena.deinit();
-    const arena = &decl_arena.allocator;
-
-    var layout: std.builtin.TypeInfo.ContainerLayout = .Auto;
-    if (container_decl.layout_token) |some| switch (token_tags[some]) {
-        .keyword_extern => layout = .Extern,
-        .keyword_packed => layout = .Packed,
-        else => unreachable,
-    };
-
-    // TODO this implementation is incorrect. The types must be created in semantic
-    // analysis, not astgen, because the same ZIR is re-used for multiple inline function calls,
-    // comptime function calls, and generic function instantiations, and these
-    // must result in different instances of container types.
-    const container_type = switch (token_tags[container_decl.ast.main_token]) {
-        .keyword_enum => blk: {
-            const tag_type: ?*zir.Inst = if (container_decl.ast.arg != 0)
-                try typeExpr(mod, &gen_scope.base, container_decl.ast.arg)
-            else
-                null;
-            const inst = try addZIRInst(mod, &gen_scope.base, src, zir.Inst.EnumType, .{
-                .fields = try arena.dupe(*zir.Inst, fields.items),
-            }, .{
-                .layout = layout,
-                .tag_type = tag_type,
-            });
-            const enum_type = try arena.create(Type.Payload.Enum);
-            enum_type.* = .{
-                .analysis = .{
-                    .queued = .{
-                        .body = .{ .instructions = try arena.dupe(*zir.Inst, gen_scope.instructions.items) },
-                        .inst = inst,
-                    },
-                },
-                .scope = .{
-                    .file_scope = scope.getFileScope(),
-                    .ty = Type.initPayload(&enum_type.base),
-                },
-            };
-            break :blk Type.initPayload(&enum_type.base);
-        },
-        .keyword_struct => blk: {
-            assert(container_decl.ast.arg == 0);
-            const inst = try addZIRInst(mod, &gen_scope.base, src, zir.Inst.StructType, .{
-                .fields = try arena.dupe(*zir.Inst, fields.items),
-            }, .{
-                .layout = layout,
-            });
-            const struct_type = try arena.create(Type.Payload.Struct);
-            struct_type.* = .{
-                .analysis = .{
-                    .queued = .{
-                        .body = .{ .instructions = try arena.dupe(*zir.Inst, gen_scope.instructions.items) },
-                        .inst = inst,
-                    },
-                },
-                .scope = .{
-                    .file_scope = scope.getFileScope(),
-                    .ty = Type.initPayload(&struct_type.base),
-                },
-            };
-            break :blk Type.initPayload(&struct_type.base);
-        },
-        .keyword_union => blk: {
-            const init_inst: ?*zir.Inst = if (container_decl.ast.arg != 0)
-                try typeExpr(mod, &gen_scope.base, container_decl.ast.arg)
-            else
-                null;
-            const has_enum_token = container_decl.ast.enum_token != null;
-            const inst = try addZIRInst(mod, &gen_scope.base, src, zir.Inst.UnionType, .{
-                .fields = try arena.dupe(*zir.Inst, fields.items),
-            }, .{
-                .layout = layout,
-                .has_enum_token = has_enum_token,
-                .init_inst = init_inst,
-            });
-            const union_type = try arena.create(Type.Payload.Union);
-            union_type.* = .{
-                .analysis = .{
-                    .queued = .{
-                        .body = .{ .instructions = try arena.dupe(*zir.Inst, gen_scope.instructions.items) },
-                        .inst = inst,
-                    },
-                },
-                .scope = .{
-                    .file_scope = scope.getFileScope(),
-                    .ty = Type.initPayload(&union_type.base),
-                },
-            };
-            break :blk Type.initPayload(&union_type.base);
-        },
-        .keyword_opaque => blk: {
-            if (fields.items.len > 0) {
-                return mod.fail(scope, fields.items[0].src, "opaque types cannot have fields", .{});
-            }
-            const opaque_type = try arena.create(Type.Payload.Opaque);
-            opaque_type.* = .{
-                .scope = .{
-                    .file_scope = scope.getFileScope(),
-                    .ty = Type.initPayload(&opaque_type.base),
-                },
-            };
-            break :blk Type.initPayload(&opaque_type.base);
-        },
-        else => unreachable,
-    };
-    const val = try Value.Tag.ty.create(arena, container_type);
-    const decl = try mod.createContainerDecl(scope, container_decl.ast.main_token, &decl_arena, .{
-        .ty = Type.initTag(.type),
-        .val = val,
-    });
-    if (rl == .ref) {
-        return addZIRInst(mod, scope, src, zir.Inst.DeclRef, .{ .decl = decl }, .{});
-    } else {
-        return rvalue(mod, scope, rl, try addZIRInst(mod, scope, src, zir.Inst.DeclVal, .{
-            .decl = decl,
-        }, .{}));
-    }
+    return mod.failTok(scope, container_decl.ast.main_token, "TODO implement container decls", .{});
 }
 
 fn errorSetDecl(
@@ -1709,7 +1523,7 @@ fn orelseCatchExpr(
 
     const src = token_starts[op_token];
 
-    var block_scope: Scope.GenZIR = .{
+    var block_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = scope.ownerDecl().?,
         .arena = scope.arena(),
@@ -1738,7 +1552,7 @@ fn orelseCatchExpr(
         .instructions = try block_scope.arena.dupe(*zir.Inst, block_scope.instructions.items),
     });
 
-    var then_scope: Scope.GenZIR = .{
+    var then_scope: Scope.GenZir = .{
         .parent = &block_scope.base,
         .decl = block_scope.decl,
         .arena = block_scope.arena,
@@ -1766,7 +1580,7 @@ fn orelseCatchExpr(
     block_scope.break_count += 1;
     const then_result = try expr(mod, then_sub_scope, block_scope.break_result_loc, rhs);
 
-    var else_scope: Scope.GenZIR = .{
+    var else_scope: Scope.GenZir = .{
         .parent = &block_scope.base,
         .decl = block_scope.decl,
         .arena = block_scope.arena,
@@ -1804,9 +1618,9 @@ fn finishThenElseBlock(
     mod: *Module,
     parent_scope: *Scope,
     rl: ResultLoc,
-    block_scope: *Scope.GenZIR,
-    then_scope: *Scope.GenZIR,
-    else_scope: *Scope.GenZIR,
+    block_scope: *Scope.GenZir,
+    then_scope: *Scope.GenZir,
+    else_scope: *Scope.GenZir,
     then_body: *zir.Body,
     else_body: *zir.Body,
     then_src: usize,
@@ -2023,7 +1837,7 @@ fn boolBinOp(
         .val = Value.initTag(.bool_type),
     });
 
-    var block_scope: Scope.GenZIR = .{
+    var block_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = scope.ownerDecl().?,
         .arena = scope.arena(),
@@ -2043,7 +1857,7 @@ fn boolBinOp(
         .instructions = try block_scope.arena.dupe(*zir.Inst, block_scope.instructions.items),
     });
 
-    var rhs_scope: Scope.GenZIR = .{
+    var rhs_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = block_scope.decl,
         .arena = block_scope.arena,
@@ -2058,7 +1872,7 @@ fn boolBinOp(
         .operand = rhs,
     }, .{});
 
-    var const_scope: Scope.GenZIR = .{
+    var const_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = block_scope.decl,
         .arena = block_scope.arena,
@@ -2100,7 +1914,7 @@ fn ifExpr(
     rl: ResultLoc,
     if_full: ast.full.If,
 ) InnerError!*zir.Inst {
-    var block_scope: Scope.GenZIR = .{
+    var block_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = scope.ownerDecl().?,
         .arena = scope.arena(),
@@ -2142,7 +1956,7 @@ fn ifExpr(
     });
 
     const then_src = token_starts[tree.lastToken(if_full.ast.then_expr)];
-    var then_scope: Scope.GenZIR = .{
+    var then_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = block_scope.decl,
         .arena = block_scope.arena,
@@ -2160,7 +1974,7 @@ fn ifExpr(
     // instructions into place until we know whether to keep store_to_block_ptr
     // instructions or not.
 
-    var else_scope: Scope.GenZIR = .{
+    var else_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = block_scope.decl,
         .arena = block_scope.arena,
@@ -2201,7 +2015,7 @@ fn ifExpr(
 }
 
 /// Expects to find exactly 1 .store_to_block_ptr instruction.
-fn copyBodyWithElidedStoreBlockPtr(body: *zir.Body, scope: Module.Scope.GenZIR) !void {
+fn copyBodyWithElidedStoreBlockPtr(body: *zir.Body, scope: Module.Scope.GenZir) !void {
     body.* = .{
         .instructions = try scope.arena.alloc(*zir.Inst, scope.instructions.items.len - 1),
     };
@@ -2215,7 +2029,7 @@ fn copyBodyWithElidedStoreBlockPtr(body: *zir.Body, scope: Module.Scope.GenZIR)
     assert(dst_index == body.instructions.len);
 }
 
-fn copyBodyNoEliding(body: *zir.Body, scope: Module.Scope.GenZIR) !void {
+fn copyBodyNoEliding(body: *zir.Body, scope: Module.Scope.GenZir) !void {
     body.* = .{
         .instructions = try scope.arena.dupe(*zir.Inst, scope.instructions.items),
     };
@@ -2234,7 +2048,7 @@ fn whileExpr(
         return mod.failTok(scope, inline_token, "TODO inline while", .{});
     }
 
-    var loop_scope: Scope.GenZIR = .{
+    var loop_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = scope.ownerDecl().?,
         .arena = scope.arena(),
@@ -2244,7 +2058,7 @@ fn whileExpr(
     setBlockResultLoc(&loop_scope, rl);
     defer loop_scope.instructions.deinit(mod.gpa);
 
-    var continue_scope: Scope.GenZIR = .{
+    var continue_scope: Scope.GenZir = .{
         .parent = &loop_scope.base,
         .decl = loop_scope.decl,
         .arena = loop_scope.arena,
@@ -2311,14 +2125,14 @@ fn whileExpr(
     loop_scope.break_block = while_block;
     loop_scope.continue_block = cond_block;
     if (while_full.label_token) |label_token| {
-        loop_scope.label = @as(?Scope.GenZIR.Label, Scope.GenZIR.Label{
+        loop_scope.label = @as(?Scope.GenZir.Label, Scope.GenZir.Label{
             .token = label_token,
             .block_inst = while_block,
         });
     }
 
     const then_src = token_starts[tree.lastToken(while_full.ast.then_expr)];
-    var then_scope: Scope.GenZIR = .{
+    var then_scope: Scope.GenZir = .{
         .parent = &continue_scope.base,
         .decl = continue_scope.decl,
         .arena = continue_scope.arena,
@@ -2332,7 +2146,7 @@ fn whileExpr(
     loop_scope.break_count += 1;
     const then_result = try expr(mod, then_sub_scope, loop_scope.break_result_loc, while_full.ast.then_expr);
 
-    var else_scope: Scope.GenZIR = .{
+    var else_scope: Scope.GenZir = .{
         .parent = &continue_scope.base,
         .decl = continue_scope.decl,
         .arena = continue_scope.arena,
@@ -2416,7 +2230,7 @@ fn forExpr(
     const cond_src = token_starts[tree.firstToken(for_full.ast.cond_expr)];
     const len = try addZIRUnOp(mod, scope, cond_src, .indexable_ptr_len, array_ptr);
 
-    var loop_scope: Scope.GenZIR = .{
+    var loop_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = scope.ownerDecl().?,
         .arena = scope.arena(),
@@ -2426,7 +2240,7 @@ fn forExpr(
     setBlockResultLoc(&loop_scope, rl);
     defer loop_scope.instructions.deinit(mod.gpa);
 
-    var cond_scope: Scope.GenZIR = .{
+    var cond_scope: Scope.GenZir = .{
         .parent = &loop_scope.base,
         .decl = loop_scope.decl,
         .arena = loop_scope.arena,
@@ -2476,7 +2290,7 @@ fn forExpr(
     loop_scope.break_block = for_block;
     loop_scope.continue_block = cond_block;
     if (for_full.label_token) |label_token| {
-        loop_scope.label = @as(?Scope.GenZIR.Label, Scope.GenZIR.Label{
+        loop_scope.label = @as(?Scope.GenZir.Label, Scope.GenZir.Label{
             .token = label_token,
             .block_inst = for_block,
         });
@@ -2484,7 +2298,7 @@ fn forExpr(
 
     // while body
     const then_src = token_starts[tree.lastToken(for_full.ast.then_expr)];
-    var then_scope: Scope.GenZIR = .{
+    var then_scope: Scope.GenZir = .{
         .parent = &cond_scope.base,
         .decl = cond_scope.decl,
         .arena = cond_scope.arena,
@@ -2529,7 +2343,7 @@ fn forExpr(
     const then_result = try expr(mod, then_sub_scope, loop_scope.break_result_loc, for_full.ast.then_expr);
 
     // else branch
-    var else_scope: Scope.GenZIR = .{
+    var else_scope: Scope.GenZir = .{
         .parent = &cond_scope.base,
         .decl = cond_scope.decl,
         .arena = cond_scope.arena,
@@ -2609,7 +2423,7 @@ fn switchExpr(
 
     const switch_src = token_starts[switch_token];
 
-    var block_scope: Scope.GenZIR = .{
+    var block_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = scope.ownerDecl().?,
         .arena = scope.arena(),
@@ -2748,7 +2562,7 @@ fn switchExpr(
         .instructions = try block_scope.arena.dupe(*zir.Inst, block_scope.instructions.items),
     });
 
-    var case_scope: Scope.GenZIR = .{
+    var case_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = block_scope.decl,
         .arena = block_scope.arena,
@@ -2757,7 +2571,7 @@ fn switchExpr(
     };
     defer case_scope.instructions.deinit(mod.gpa);
 
-    var else_scope: Scope.GenZIR = .{
+    var else_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = case_scope.decl,
         .arena = case_scope.arena,
@@ -2966,12 +2780,8 @@ fn identifier(
         return mod.failNode(scope, ident, "TODO implement '_' identifier", .{});
     }
 
-    if (simple_types.get(ident_name)) |val_tag| {
-        const result = try addZIRInstConst(mod, scope, src, TypedValue{
-            .ty = Type.initTag(.type),
-            .val = Value.initTag(val_tag),
-        });
-        return rvalue(mod, scope, rl, result);
+    if (simple_types.get(ident_name)) |zir_const_tag| {
+        return rvalue(mod, scope, rl, @enumToInt(zir_const_tag));
     }
 
     if (ident_name.len >= 2) integer: {
@@ -3030,8 +2840,8 @@ fn identifier(
                 }
                 s = local_ptr.parent;
             },
-            .gen_zir => s = s.cast(Scope.GenZIR).?.parent,
-            .gen_suspend => s = s.cast(Scope.GenZIR).?.parent,
+            .gen_zir => s = s.cast(Scope.GenZir).?.parent,
+            .gen_suspend => s = s.cast(Scope.GenZir).?.parent,
             .gen_nosuspend => s = s.cast(Scope.Nosuspend).?.parent,
             else => break,
         };
@@ -3166,33 +2976,16 @@ fn integerLiteral(
     rl: ResultLoc,
     int_lit: ast.Node.Index,
 ) InnerError!*zir.Inst {
-    const arena = scope.arena();
     const tree = scope.tree();
     const main_tokens = tree.nodes.items(.main_token);
-    const token_starts = tree.tokens.items(.start);
-
     const int_token = main_tokens[int_lit];
     const prefixed_bytes = tree.tokenSlice(int_token);
-    const base: u8 = if (mem.startsWith(u8, prefixed_bytes, "0x"))
-        16
-    else if (mem.startsWith(u8, prefixed_bytes, "0o"))
-        8
-    else if (mem.startsWith(u8, prefixed_bytes, "0b"))
-        2
-    else
-        @as(u8, 10);
-
-    const bytes = if (base == 10)
-        prefixed_bytes
-    else
-        prefixed_bytes[2..];
-
-    if (std.fmt.parseInt(u64, bytes, base)) |small_int| {
-        const src = token_starts[int_token];
-        const result = try addZIRInstConst(mod, scope, src, .{
-            .ty = Type.initTag(.comptime_int),
-            .val = try Value.Tag.int_u64.create(arena, small_int),
-        });
+    if (std.fmt.parseInt(u64, prefixed_bytes, 0)) |small_int| {
+        const result: zir.Inst.Index = switch (small_int) {
+            0 => @enumToInt(zir.Const.zero),
+            1 => @enumToInt(zir.Const.one),
+            else => try addZirInt(small_int),
+        };
         return rvalue(mod, scope, rl, result);
     } else |err| {
         return mod.failTok(scope, int_token, "TODO implement int literals that don't fit in a u64", .{});
@@ -3316,7 +3109,7 @@ fn asRlPtr(
     // Detect whether this expr() call goes into rvalue() to store the result into the
     // result location. If it does, elide the coerce_result_ptr instruction
     // as well as the store instruction, instead passing the result as an rvalue.
-    var as_scope: Scope.GenZIR = .{
+    var as_scope: Scope.GenZir = .{
         .parent = scope,
         .decl = scope.ownerDecl().?,
         .arena = scope.arena(),
@@ -3327,7 +3120,7 @@ fn asRlPtr(
 
     as_scope.rl_ptr = try addZIRBinOp(mod, &as_scope.base, src, .coerce_result_ptr, dest_type, result_ptr);
     const result = try expr(mod, &as_scope.base, .{ .block_ptr = &as_scope }, operand_node);
-    const parent_zir = &scope.getGenZIR().instructions;
+    const parent_zir = &scope.getGenZir().instructions;
     if (as_scope.rvalue_rl_count == 1) {
         // Busted! This expression didn't actually need a pointer.
         const expected_len = parent_zir.items.len + as_scope.instructions.items.len - 2;
@@ -3622,39 +3415,47 @@ fn callExpr(
     mod: *Module,
     scope: *Scope,
     rl: ResultLoc,
+    node: ast.Node.Index,
     call: ast.full.Call,
 ) InnerError!*zir.Inst {
     if (call.async_token) |async_token| {
         return mod.failTok(scope, async_token, "TODO implement async fn call", .{});
     }
-
-    const tree = scope.tree();
-    const main_tokens = tree.nodes.items(.main_token);
-    const token_starts = tree.tokens.items(.start);
-
     const lhs = try expr(mod, scope, .none, call.ast.fn_expr);
 
-    const args = try scope.getGenZIR().arena.alloc(*zir.Inst, call.ast.params.len);
+    const args = try mod.gpa.alloc(zir.Inst.Index, call.ast.params.len);
+    defer mod.gpa.free(args);
+
+    const gen_zir = scope.getGenZir();
     for (call.ast.params) |param_node, i| {
-        const param_src = token_starts[tree.firstToken(param_node)];
-        const param_type = try addZIRInst(mod, scope, param_src, zir.Inst.ParamType, .{
-            .func = lhs,
-            .arg_index = i,
-        }, .{});
+        const param_type = try gen_zir.addParamType(.{
+            .callee = lhs,
+            .param_index = i,
+        });
         args[i] = try expr(mod, scope, .{ .ty = param_type }, param_node);
     }
 
-    const src = token_starts[call.ast.lparen];
-    var modifier: std.builtin.CallOptions.Modifier = .auto;
-    if (call.async_token) |_| modifier = .async_kw;
-
-    const result = try addZIRInst(mod, scope, src, zir.Inst.Call, .{
-        .func = lhs,
-        .args = args,
-        .modifier = modifier,
-    }, .{});
-    // TODO function call with result location
-    return rvalue(mod, scope, rl, result);
+    const modifier: std.builtin.CallOptions.Modifier = switch (call.async_token != null) {
+        true => .async_kw,
+        false => .auto,
+    };
+    const result: zir.Inst.Index = res: {
+        const tag: zir.Inst.Tag = switch (modifier) {
+            .auto => switch (args.len == 0) {
+                true => break :res try gen_zir.addCallNone(lhs, node),
+                false => .call,
+            },
+            .async_kw => .call_async_kw,
+            .never_tail => unreachable,
+            .never_inline => unreachable,
+            .no_async => .call_no_async,
+            .always_tail => unreachable,
+            .always_inline => unreachable,
+            .compile_time => .call_compile_time,
+        };
+        break :res try gen_zir.addCall(tag, lhs, args, node);
+    };
+    return rvalue(mod, scope, rl, result); // TODO function call with result location
 }
 
 fn suspendExpr(mod: *Module, scope: *Scope, node: ast.Node.Index) InnerError!*zir.Inst {
@@ -3748,11 +3549,17 @@ fn resumeExpr(mod: *Module, scope: *Scope, node: ast.Node.Index) InnerError!*zir
     return addZIRUnOp(mod, scope, src, .@"resume", operand);
 }
 
-pub const simple_types = std.ComptimeStringMap(Value.Tag, .{
+pub const simple_types = std.ComptimeStringMap(zir.Const, .{
     .{ "u8", .u8_type },
     .{ "i8", .i8_type },
-    .{ "isize", .isize_type },
+    .{ "u16", .u16_type },
+    .{ "i16", .i16_type },
+    .{ "u32", .u32_type },
+    .{ "i32", .i32_type },
+    .{ "u64", .u64_type },
+    .{ "i64", .i64_type },
     .{ "usize", .usize_type },
+    .{ "isize", .isize_type },
     .{ "c_short", .c_short_type },
     .{ "c_ushort", .c_ushort_type },
     .{ "c_int", .c_int_type },
@@ -3774,6 +3581,11 @@ pub const simple_types = std.ComptimeStringMap(Value.Tag, .{
     .{ "comptime_int", .comptime_int_type },
     .{ "comptime_float", .comptime_float_type },
     .{ "noreturn", .noreturn_type },
+    .{ "anyframe", .anyframe_type },
+    .{ "undefined", .undef },
+    .{ "null", .null_value },
+    .{ "true", .bool_true },
+    .{ "false", .bool_false },
 });
 
 fn nodeMayNeedMemoryLocation(scope: *Scope, start_node: ast.Node.Index) bool {
@@ -4045,7 +3859,7 @@ fn rvalueVoid(
     return rvalue(mod, scope, rl, void_inst);
 }
 
-fn rlStrategy(rl: ResultLoc, block_scope: *Scope.GenZIR) ResultLoc.Strategy {
+fn rlStrategy(rl: ResultLoc, block_scope: *Scope.GenZir) ResultLoc.Strategy {
     var elide_store_to_block_ptr_instructions = false;
     switch (rl) {
         // In this branch there will not be any store_to_block_ptr instructions.
@@ -4099,7 +3913,7 @@ fn makeOptionalTypeResultLoc(mod: *Module, scope: *Scope, src: usize, rl: Result
     }
 }
 
-fn setBlockResultLoc(block_scope: *Scope.GenZIR, parent_rl: ResultLoc) void {
+fn setBlockResultLoc(block_scope: *Scope.GenZir, parent_rl: ResultLoc) void {
     // Depending on whether the result location is a pointer or value, different
     // ZIR needs to be generated. In the former case we rely on storing to the
     // pointer to communicate the result, and use breakvoid; in the latter case
@@ -4137,7 +3951,7 @@ pub fn addZirInstTag(
     comptime tag: zir.Inst.Tag,
     positionals: std.meta.fieldInfo(tag.Type(), .positionals).field_type,
 ) !*zir.Inst {
-    const gen_zir = scope.getGenZIR();
+    const gen_zir = scope.getGenZir();
     try gen_zir.instructions.ensureCapacity(mod.gpa, gen_zir.instructions.items.len + 1);
     const inst = try gen_zir.arena.create(tag.Type());
     inst.* = .{
@@ -4160,7 +3974,7 @@ pub fn addZirInstT(
     tag: zir.Inst.Tag,
     positionals: std.meta.fieldInfo(T, .positionals).field_type,
 ) !*T {
-    const gen_zir = scope.getGenZIR();
+    const gen_zir = scope.getGenZir();
     try gen_zir.instructions.ensureCapacity(mod.gpa, gen_zir.instructions.items.len + 1);
     const inst = try gen_zir.arena.create(T);
     inst.* = .{
@@ -4183,7 +3997,7 @@ pub fn addZIRInstSpecial(
     positionals: std.meta.fieldInfo(T, .positionals).field_type,
     kw_args: std.meta.fieldInfo(T, .kw_args).field_type,
 ) !*T {
-    const gen_zir = scope.getGenZIR();
+    const gen_zir = scope.getGenZir();
     try gen_zir.instructions.ensureCapacity(mod.gpa, gen_zir.instructions.items.len + 1);
     const inst = try gen_zir.arena.create(T);
     inst.* = .{
@@ -4199,7 +4013,7 @@ pub fn addZIRInstSpecial(
 }
 
 pub fn addZIRNoOpT(mod: *Module, scope: *Scope, src: usize, tag: zir.Inst.Tag) !*zir.Inst.NoOp {
-    const gen_zir = scope.getGenZIR();
+    const gen_zir = scope.getGenZir();
     try gen_zir.instructions.ensureCapacity(mod.gpa, gen_zir.instructions.items.len + 1);
     const inst = try gen_zir.arena.create(zir.Inst.NoOp);
     inst.* = .{
@@ -4226,7 +4040,7 @@ pub fn addZIRUnOp(
     tag: zir.Inst.Tag,
     operand: *zir.Inst,
 ) !*zir.Inst {
-    const gen_zir = scope.getGenZIR();
+    const gen_zir = scope.getGenZir();
     try gen_zir.instructions.ensureCapacity(mod.gpa, gen_zir.instructions.items.len + 1);
     const inst = try gen_zir.arena.create(zir.Inst.UnOp);
     inst.* = .{
@@ -4251,7 +4065,7 @@ pub fn addZIRBinOp(
     lhs: *zir.Inst,
     rhs: *zir.Inst,
 ) !*zir.Inst {
-    const gen_zir = scope.getGenZIR();
+    const gen_zir = scope.getGenZir();
     try gen_zir.instructions.ensureCapacity(mod.gpa, gen_zir.instructions.items.len + 1);
     const inst = try gen_zir.arena.create(zir.Inst.BinOp);
     inst.* = .{
@@ -4276,7 +4090,7 @@ pub fn addZIRInstBlock(
     tag: zir.Inst.Tag,
     body: zir.Body,
 ) !*zir.Inst.Block {
-    const gen_zir = scope.getGenZIR();
+    const gen_zir = scope.getGenZir();
     try gen_zir.instructions.ensureCapacity(mod.gpa, gen_zir.instructions.items.len + 1);
     const inst = try gen_zir.arena.create(zir.Inst.Block);
     inst.* = .{
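For context on what the hunks above are migrating toward: the commit message describes replacing per-instruction heap objects (the `arena.create(zir.Inst.BinOp)` pattern seen in these helpers) with a dense tag-plus-data encoding. The sketch below is illustrative only — the names and fields are assumptions, not the actual definitions in zir.zig — but it shows the shape: a 1-byte tag, 8 bytes of inline data, with larger payloads spilled into `extra` and strings into `string_bytes`.

```zig
const std = @import("std");

// Illustrative sketch of the reworked layout; the real definitions in
// zir.zig differ in detail. Every instruction is the same small size.
pub const Inst = struct {
    tag: Tag,
    data: Data,

    pub const Tag = enum(u8) {
        add, // data is `bin`: two 4-byte instruction indices
        call, // data is `pl`: a 4-byte index into `extra`
        breakpoint, // data is unused
    };

    // 8 bytes shared by all instructions; `tag` says how to read it.
    pub const Data = union {
        bin: struct { lhs: u32, rhs: u32 },
        pl: struct { payload_index: u32 },
        none: void,
    };
};

// A "finished" set of instructions, analogous to `zir.Code`: immutable
// arrays that live for the duration of a function's existence.
pub const Code = struct {
    instructions: []Inst,
    extra: []u32,
    string_bytes: []u8,
};
```

Because instructions are referenced by 4-byte indices rather than pointers, a body is just a slice of `u32` into `extra`, which is what lets `Module.Fn` keep its parameter names inside the `zir.Code` rather than inside individual instructions.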
src/Compilation.zig
@@ -259,7 +259,7 @@ pub const CObject = struct {
 /// To support incremental compilation, errors are stored in various places
 /// so that they can be created and destroyed appropriately. This structure
 /// is used to collect all the errors from the various places into one
-/// convenient place for API users to consume. It is allocated into 1 heap
+/// convenient place for API users to consume. It is allocated into 1 arena
 /// and freed all at once.
 pub const AllErrors = struct {
     arena: std.heap.ArenaAllocator.State,
@@ -267,11 +267,11 @@ pub const AllErrors = struct {
 
     pub const Message = union(enum) {
         src: struct {
-            src_path: []const u8,
-            line: usize,
-            column: usize,
-            byte_offset: usize,
             msg: []const u8,
+            src_path: []const u8,
+            line: u32,
+            column: u32,
+            byte_offset: u32,
             notes: []Message = &.{},
         },
         plain: struct {
@@ -316,29 +316,31 @@ pub const AllErrors = struct {
         const notes = try arena.allocator.alloc(Message, module_err_msg.notes.len);
         for (notes) |*note, i| {
             const module_note = module_err_msg.notes[i];
-            const source = try module_note.src_loc.file_scope.getSource(module);
-            const loc = std.zig.findLineColumn(source, module_note.src_loc.byte_offset);
-            const sub_file_path = module_note.src_loc.file_scope.sub_file_path;
+            const source = try module_note.src_loc.fileScope().getSource(module);
+            const byte_offset = try module_note.src_loc.byteOffset(module);
+            const loc = std.zig.findLineColumn(source, byte_offset);
+            const sub_file_path = module_note.src_loc.fileScope().sub_file_path;
             note.* = .{
                 .src = .{
                     .src_path = try arena.allocator.dupe(u8, sub_file_path),
                     .msg = try arena.allocator.dupe(u8, module_note.msg),
-                    .byte_offset = module_note.src_loc.byte_offset,
-                    .line = loc.line,
-                    .column = loc.column,
+                    .byte_offset = byte_offset,
+                    .line = @intCast(u32, loc.line),
+                    .column = @intCast(u32, loc.column),
                 },
             };
         }
-        const source = try module_err_msg.src_loc.file_scope.getSource(module);
-        const loc = std.zig.findLineColumn(source, module_err_msg.src_loc.byte_offset);
-        const sub_file_path = module_err_msg.src_loc.file_scope.sub_file_path;
+        const source = try module_err_msg.src_loc.fileScope().getSource(module);
+        const byte_offset = try module_err_msg.src_loc.byteOffset(module);
+        const loc = std.zig.findLineColumn(source, byte_offset);
+        const sub_file_path = module_err_msg.src_loc.fileScope().sub_file_path;
         try errors.append(.{
             .src = .{
                 .src_path = try arena.allocator.dupe(u8, sub_file_path),
                 .msg = try arena.allocator.dupe(u8, module_err_msg.msg),
-                .byte_offset = module_err_msg.src_loc.byte_offset,
-                .line = loc.line,
-                .column = loc.column,
+                .byte_offset = byte_offset,
+                .line = @intCast(u32, loc.line),
+                .column = @intCast(u32, loc.column),
                 .notes = notes,
             },
         });
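The error collection above maps a byte offset to `line`/`column` with `std.zig.findLineColumn`. As a rough sketch of what such a helper computes (assuming 0-based line and column counts, which matches how the results are stored here without adjustment):

```zig
const Loc = struct { line: usize, column: usize };

// Scan the source up to `byte_offset`, counting newlines for the line
// number; the column resets to zero after each '\n'.
fn findLineColumn(source: []const u8, byte_offset: usize) Loc {
    var loc = Loc{ .line = 0, .column = 0 };
    for (source[0..byte_offset]) |byte| {
        if (byte == '\n') {
            loc.line += 1;
            loc.column = 0;
        } else {
            loc.column += 1;
        }
    }
    return loc;
}
```

This is why the new `SrcLoc.byteOffset(module)` call precedes the lookup: once source locations are no longer plain byte offsets, the byte offset must be resolved first, then converted to line/column for display.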
src/ir.zig
@@ -360,7 +360,8 @@ pub const Inst = struct {
         base: Inst,
         asm_source: []const u8,
         is_volatile: bool,
-        output: ?[]const u8,
+        output: ?*Inst,
+        output_name: ?[]const u8,
         inputs: []const []const u8,
         clobbers: []const []const u8,
         args: []const *Inst,
@@ -589,3 +590,445 @@ pub const Inst = struct {
 pub const Body = struct {
     instructions: []*Inst,
 };
+
+/// For debugging purposes, prints a function representation to stderr.
+pub fn dumpFn(old_module: IrModule, module_fn: *IrModule.Fn) void {
+    const allocator = old_module.gpa;
+    var ctx: DumpTzir = .{
+        .allocator = allocator,
+        .arena = std.heap.ArenaAllocator.init(allocator),
+        .old_module = &old_module,
+        .module_fn = module_fn,
+        .indent = 2,
+        .inst_table = DumpTzir.InstTable.init(allocator),
+        .partial_inst_table = DumpTzir.InstTable.init(allocator),
+        .const_table = DumpTzir.InstTable.init(allocator),
+    };
+    defer ctx.inst_table.deinit();
+    defer ctx.partial_inst_table.deinit();
+    defer ctx.const_table.deinit();
+    defer ctx.arena.deinit();
+
+    switch (module_fn.state) {
+        .queued => std.debug.print("(queued)", .{}),
+        .inline_only => std.debug.print("(inline_only)", .{}),
+        .in_progress => std.debug.print("(in_progress)", .{}),
+        .sema_failure => std.debug.print("(sema_failure)", .{}),
+        .dependency_failure => std.debug.print("(dependency_failure)", .{}),
+        .success => {
+            const writer = std.io.getStdErr().writer();
+            ctx.dump(module_fn.body, writer) catch @panic("failed to dump TZIR");
+        },
+    }
+}
+
+const DumpTzir = struct {
+    allocator: *Allocator,
+    arena: std.heap.ArenaAllocator,
+    old_module: *const IrModule,
+    module_fn: *IrModule.Fn,
+    indent: usize,
+    inst_table: InstTable,
+    partial_inst_table: InstTable,
+    const_table: InstTable,
+    next_index: usize = 0,
+    next_partial_index: usize = 0,
+    next_const_index: usize = 0,
+
+    const InstTable = std.AutoArrayHashMap(*ir.Inst, usize);
+
+    /// TODO: Improve this code to include a stack of ir.Body and store the instructions
+    /// in there. Currently we put all the instructions in a function-local table, but
+    /// instructions that are in a Body can be thrown away when the Body ends.
+    fn dump(dtz: *DumpTzir, body: ir.Body, writer: std.fs.File.Writer) !void {
+        // First pass to pre-populate the table so that we can show even invalid references.
+        // Must iterate the same order we iterate the second time.
+        // We also look for constants and put them in the const_table.
+        try dtz.fetchInstsAndResolveConsts(body);
+
+        std.debug.print("Module.Function(name={s}):\n", .{dtz.module_fn.owner_decl.name});
+
+        for (dtz.const_table.items()) |entry| {
+            const constant = entry.key.castTag(.constant).?;
+            try writer.print("  @{d}: {} = {};\n", .{
+                entry.value, constant.base.ty, constant.val,
+            });
+        }
+
+        return dtz.dumpBody(body, writer);
+    }
+
+    fn fetchInstsAndResolveConsts(dtz: *DumpTzir, body: ir.Body) error{OutOfMemory}!void {
+        for (body.instructions) |inst| {
+            try dtz.inst_table.put(inst, dtz.next_index);
+            dtz.next_index += 1;
+            switch (inst.tag) {
+                .alloc,
+                .retvoid,
+                .unreach,
+                .breakpoint,
+                .dbg_stmt,
+                .arg,
+                => {},
+
+                .ref,
+                .ret,
+                .bitcast,
+                .not,
+                .is_non_null,
+                .is_non_null_ptr,
+                .is_null,
+                .is_null_ptr,
+                .is_err,
+                .is_err_ptr,
+                .ptrtoint,
+                .floatcast,
+                .intcast,
+                .load,
+                .optional_payload,
+                .optional_payload_ptr,
+                .wrap_optional,
+                .wrap_errunion_payload,
+                .wrap_errunion_err,
+                .unwrap_errunion_payload,
+                .unwrap_errunion_err,
+                .unwrap_errunion_payload_ptr,
+                .unwrap_errunion_err_ptr,
+                => {
+                    const un_op = inst.cast(ir.Inst.UnOp).?;
+                    try dtz.findConst(un_op.operand);
+                },
+
+                .add,
+                .sub,
+                .mul,
+                .cmp_lt,
+                .cmp_lte,
+                .cmp_eq,
+                .cmp_gte,
+                .cmp_gt,
+                .cmp_neq,
+                .store,
+                .bool_and,
+                .bool_or,
+                .bit_and,
+                .bit_or,
+                .xor,
+                => {
+                    const bin_op = inst.cast(ir.Inst.BinOp).?;
+                    try dtz.findConst(bin_op.lhs);
+                    try dtz.findConst(bin_op.rhs);
+                },
+
+                .br => {
+                    const br = inst.castTag(.br).?;
+                    try dtz.findConst(&br.block.base);
+                    try dtz.findConst(br.operand);
+                },
+
+                .br_block_flat => {
+                    const br_block_flat = inst.castTag(.br_block_flat).?;
+                    try dtz.findConst(&br_block_flat.block.base);
+                    try dtz.fetchInstsAndResolveConsts(br_block_flat.body);
+                },
+
+                .br_void => {
+                    const br_void = inst.castTag(.br_void).?;
+                    try dtz.findConst(&br_void.block.base);
+                },
+
+                .block => {
+                    const block = inst.castTag(.block).?;
+                    try dtz.fetchInstsAndResolveConsts(block.body);
+                },
+
+                .condbr => {
+                    const condbr = inst.castTag(.condbr).?;
+                    try dtz.findConst(condbr.condition);
+                    try dtz.fetchInstsAndResolveConsts(condbr.then_body);
+                    try dtz.fetchInstsAndResolveConsts(condbr.else_body);
+                },
+
+                .loop => {
+                    const loop = inst.castTag(.loop).?;
+                    try dtz.fetchInstsAndResolveConsts(loop.body);
+                },
+                .call => {
+                    const call = inst.castTag(.call).?;
+                    try dtz.findConst(call.func);
+                    for (call.args) |arg| {
+                        try dtz.findConst(arg);
+                    }
+                },
+
+                // TODO fill out this debug printing
+                .assembly,
+                .constant,
+                .varptr,
+                .switchbr,
+                => {},
+            }
+        }
+    }
+
+    fn dumpBody(dtz: *DumpTzir, body: ir.Body, writer: std.fs.File.Writer) (std.fs.File.WriteError || error{OutOfMemory})!void {
+        for (body.instructions) |inst| {
+            const my_index = dtz.next_partial_index;
+            try dtz.partial_inst_table.put(inst, my_index);
+            dtz.next_partial_index += 1;
+
+            try writer.writeByteNTimes(' ', dtz.indent);
+            try writer.print("%{d}: {} = {s}(", .{
+                my_index, inst.ty, @tagName(inst.tag),
+            });
+            switch (inst.tag) {
+                .alloc,
+                .retvoid,
+                .unreach,
+                .breakpoint,
+                .dbg_stmt,
+                => try writer.writeAll(")\n"),
+
+                .ref,
+                .ret,
+                .bitcast,
+                .not,
+                .is_non_null,
+                .is_null,
+                .is_non_null_ptr,
+                .is_null_ptr,
+                .is_err,
+                .is_err_ptr,
+                .ptrtoint,
+                .floatcast,
+                .intcast,
+                .load,
+                .optional_payload,
+                .optional_payload_ptr,
+                .wrap_optional,
+                .wrap_errunion_err,
+                .wrap_errunion_payload,
+                .unwrap_errunion_err,
+                .unwrap_errunion_payload,
+                .unwrap_errunion_payload_ptr,
+                .unwrap_errunion_err_ptr,
+                => {
+                    const un_op = inst.cast(ir.Inst.UnOp).?;
+                    const kinky = try dtz.writeInst(writer, un_op.operand);
+                    if (kinky != null) {
+                        try writer.writeAll(") // Instruction does not dominate all uses!\n");
+                    } else {
+                        try writer.writeAll(")\n");
+                    }
+                },
+
+                .add,
+                .sub,
+                .mul,
+                .cmp_lt,
+                .cmp_lte,
+                .cmp_eq,
+                .cmp_gte,
+                .cmp_gt,
+                .cmp_neq,
+                .store,
+                .bool_and,
+                .bool_or,
+                .bit_and,
+                .bit_or,
+                .xor,
+                => {
+                    const bin_op = inst.cast(ir.Inst.BinOp).?;
+
+                    const lhs_kinky = try dtz.writeInst(writer, bin_op.lhs);
+                    try writer.writeAll(", ");
+                    const rhs_kinky = try dtz.writeInst(writer, bin_op.rhs);
+
+                    if (lhs_kinky != null or rhs_kinky != null) {
+                        try writer.writeAll(") // Instruction does not dominate all uses!");
+                        if (lhs_kinky) |lhs| {
+                            try writer.print(" %{d}", .{lhs});
+                        }
+                        if (rhs_kinky) |rhs| {
+                            try writer.print(" %{d}", .{rhs});
+                        }
+                        try writer.writeAll("\n");
+                    } else {
+                        try writer.writeAll(")\n");
+                    }
+                },
+
+                .arg => {
+                    const arg = inst.castTag(.arg).?;
+                    try writer.print("{s})\n", .{arg.name});
+                },
+
+                .br => {
+                    const br = inst.castTag(.br).?;
+
+                    const lhs_kinky = try dtz.writeInst(writer, &br.block.base);
+                    try writer.writeAll(", ");
+                    const rhs_kinky = try dtz.writeInst(writer, br.operand);
+
+                    if (lhs_kinky != null or rhs_kinky != null) {
+                        try writer.writeAll(") // Instruction does not dominate all uses!");
+                        if (lhs_kinky) |lhs| {
+                            try writer.print(" %{d}", .{lhs});
+                        }
+                        if (rhs_kinky) |rhs| {
+                            try writer.print(" %{d}", .{rhs});
+                        }
+                        try writer.writeAll("\n");
+                    } else {
+                        try writer.writeAll(")\n");
+                    }
+                },
+
+                .br_block_flat => {
+                    const br_block_flat = inst.castTag(.br_block_flat).?;
+                    const block_kinky = try dtz.writeInst(writer, &br_block_flat.block.base);
+                    if (block_kinky != null) {
+                        try writer.writeAll(", { // Instruction does not dominate all uses!\n");
+                    } else {
+                        try writer.writeAll(", {\n");
+                    }
+
+                    const old_indent = dtz.indent;
+                    dtz.indent += 2;
+                    try dtz.dumpBody(br_block_flat.body, writer);
+                    dtz.indent = old_indent;
+
+                    try writer.writeByteNTimes(' ', dtz.indent);
+                    try writer.writeAll("})\n");
+                },
+
+                .br_void => {
+                    const br_void = inst.castTag(.br_void).?;
+                    const kinky = try dtz.writeInst(writer, &br_void.block.base);
+                    if (kinky) |_| {
+                        try writer.writeAll(") // Instruction does not dominate all uses!\n");
+                    } else {
+                        try writer.writeAll(")\n");
+                    }
+                },
+
+                .block => {
+                    const block = inst.castTag(.block).?;
+
+                    try writer.writeAll("{\n");
+
+                    const old_indent = dtz.indent;
+                    dtz.indent += 2;
+                    try dtz.dumpBody(block.body, writer);
+                    dtz.indent = old_indent;
+
+                    try writer.writeByteNTimes(' ', dtz.indent);
+                    try writer.writeAll("})\n");
+                },
+
+                .condbr => {
+                    const condbr = inst.castTag(.condbr).?;
+
+                    const condition_kinky = try dtz.writeInst(writer, condbr.condition);
+                    if (condition_kinky != null) {
+                        try writer.writeAll(", { // Instruction does not dominate all uses!\n");
+                    } else {
+                        try writer.writeAll(", {\n");
+                    }
+
+                    const old_indent = dtz.indent;
+                    dtz.indent += 2;
+                    try dtz.dumpBody(condbr.then_body, writer);
+
+                    try writer.writeByteNTimes(' ', old_indent);
+                    try writer.writeAll("}, {\n");
+
+                    try dtz.dumpBody(condbr.else_body, writer);
+                    dtz.indent = old_indent;
+
+                    try writer.writeByteNTimes(' ', old_indent);
+                    try writer.writeAll("})\n");
+                },
+
+                .loop => {
+                    const loop = inst.castTag(.loop).?;
+
+                    try writer.writeAll("{\n");
+
+                    const old_indent = dtz.indent;
+                    dtz.indent += 2;
+                    try dtz.dumpBody(loop.body, writer);
+                    dtz.indent = old_indent;
+
+                    try writer.writeByteNTimes(' ', dtz.indent);
+                    try writer.writeAll("})\n");
+                },
+
+                .call => {
+                    const call = inst.castTag(.call).?;
+
+                    const args_kinky = try dtz.allocator.alloc(?usize, call.args.len);
+                    defer dtz.allocator.free(args_kinky);
+                    std.mem.set(?usize, args_kinky, null);
+                    var any_kinky_args = false;
+
+                    const func_kinky = try dtz.writeInst(writer, call.func);
+
+                    for (call.args) |arg, i| {
+                        try writer.writeAll(", ");
+
+                        args_kinky[i] = try dtz.writeInst(writer, arg);
+                        any_kinky_args = any_kinky_args or args_kinky[i] != null;
+                    }
+
+                    if (func_kinky != null or any_kinky_args) {
+                        try writer.writeAll(") // Instruction does not dominate all uses!");
+                        if (func_kinky) |func_index| {
+                            try writer.print(" %{d}", .{func_index});
+                        }
+                        for (args_kinky) |arg_kinky| {
+                            if (arg_kinky) |arg_index| {
+                                try writer.print(" %{d}", .{arg_index});
+                            }
+                        }
+                        try writer.writeAll("\n");
+                    } else {
+                        try writer.writeAll(")\n");
+                    }
+                },
+
+                // TODO fill out this debug printing
+                .assembly,
+                .constant,
+                .varptr,
+                .switchbr,
+                => {
+                    try writer.writeAll("!TODO!)\n");
+                },
+            }
+        }
+    }
+
+    fn writeInst(dtz: *DumpTzir, writer: std.fs.File.Writer, inst: *ir.Inst) !?usize {
+        if (dtz.partial_inst_table.get(inst)) |operand_index| {
+            try writer.print("%{d}", .{operand_index});
+            return null;
+        } else if (dtz.const_table.get(inst)) |operand_index| {
+            try writer.print("@{d}", .{operand_index});
+            return null;
+        } else if (dtz.inst_table.get(inst)) |operand_index| {
+            try writer.print("%{d}", .{operand_index});
+            return operand_index;
+        } else {
+            try writer.writeAll("!BADREF!");
+            return null;
+        }
+    }
+
+    fn findConst(dtz: *DumpTzir, operand: *ir.Inst) !void {
+        if (operand.tag == .constant) {
+            try dtz.const_table.put(operand, dtz.next_const_index);
+            dtz.next_const_index += 1;
+        }
+    }
+};
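For orientation, the printing scheme above assigns body-local instructions `%n` indexes (via `next_partial_index`), numbers constants separately as `@n` (via `findConst`), and appends a `// Instruction does not dominate all uses!` note when an operand only resolves through `inst_table`. A hypothetical dump of a small function (instruction names, types, and values invented purely for illustration) would look like:

```
%0: u32 = arg(a)
%1: u32 = add(%0, @0)
%2: bool = cmp_lt(%1, @1)
%3: noreturn = ret(%1)
```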
src/Module.zig
@@ -1,31 +1,32 @@
-const Module = @This();
+//! Compilation of all Zig source code is represented by one `Module`.
+//! Each `Compilation` has exactly zero or one `Module`, depending on whether
+//! there is any Zig source code.
+
 const std = @import("std");
-const Compilation = @import("Compilation.zig");
 const mem = std.mem;
 const Allocator = std.mem.Allocator;
 const ArrayListUnmanaged = std.ArrayListUnmanaged;
-const Value = @import("value.zig").Value;
-const Type = @import("type.zig").Type;
-const TypedValue = @import("TypedValue.zig");
 const assert = std.debug.assert;
 const log = std.log.scoped(.module);
 const BigIntConst = std.math.big.int.Const;
 const BigIntMutable = std.math.big.int.Mutable;
 const Target = std.Target;
+const ast = std.zig.ast;
+
+const Module = @This();
+const Compilation = @import("Compilation.zig");
+const Value = @import("value.zig").Value;
+const Type = @import("type.zig").Type;
+const TypedValue = @import("TypedValue.zig");
 const Package = @import("Package.zig");
 const link = @import("link.zig");
 const ir = @import("ir.zig");
 const zir = @import("zir.zig");
-const Inst = ir.Inst;
-const Body = ir.Body;
-const ast = std.zig.ast;
 const trace = @import("tracy.zig").trace;
 const astgen = @import("astgen.zig");
-const zir_sema = @import("zir_sema.zig");
+const Sema = @import("zir_sema.zig"); // TODO rename this file
 const target_util = @import("target.zig");
 
-const default_eval_branch_quota = 1000;
-
 /// General-purpose allocator. Used for both temporary and long-term storage.
 gpa: *Allocator,
 comp: *Compilation,
@@ -106,8 +107,7 @@ compile_log_text: std.ArrayListUnmanaged(u8) = .{},
 
 pub const Export = struct {
     options: std.builtin.ExportOptions,
-    /// Byte offset into the file that contains the export directive.
-    src: usize,
+    src: LazySrcLoc,
     /// Represents the position of the export, if any, in the output file.
     link: link.File.Export,
     /// The Decl that performs the export. Note that this is *not* the Decl being exported.
@@ -132,11 +132,12 @@ pub const DeclPlusEmitH = struct {
 };
 
 pub const Decl = struct {
-    /// This name is relative to the containing namespace of the decl. It uses a null-termination
-    /// to save bytes, since there can be a lot of decls in a compilation. The null byte is not allowed
-    /// in symbol names, because executable file formats use null-terminated strings for symbol names.
-    /// All Decls have names, even values that are not bound to a zig namespace. This is necessary for
-    /// mapping them to an address in the output file.
+    /// This name is relative to the containing namespace of the decl. It uses
+    /// null-termination to save bytes, since there can be a lot of decls in a
+    /// compilation. The null byte is not allowed in symbol names, because
+    /// executable file formats use null-terminated strings for symbol names.
+    /// All Decls have names, even values that are not bound to a zig namespace.
+    /// This is necessary for mapping them to an address in the output file.
     /// Memory owned by this decl, using Module's allocator.
     name: [*:0]const u8,
     /// The direct parent container of the Decl.
@@ -219,73 +220,82 @@ pub const Decl = struct {
     /// stage1 compiler giving me: `error: struct 'Module.Decl' depends on itself`
     pub const DepsTable = std.ArrayHashMapUnmanaged(*Decl, void, std.array_hash_map.getAutoHashFn(*Decl), std.array_hash_map.getAutoEqlFn(*Decl), false);
 
-    pub fn destroy(self: *Decl, module: *Module) void {
+    pub fn destroy(decl: *Decl, module: *Module) void {
         const gpa = module.gpa;
-        gpa.free(mem.spanZ(self.name));
-        if (self.typedValueManaged()) |tvm| {
+        gpa.free(mem.spanZ(decl.name));
+        if (decl.typedValueManaged()) |tvm| {
             tvm.deinit(gpa);
         }
-        self.dependants.deinit(gpa);
-        self.dependencies.deinit(gpa);
+        decl.dependants.deinit(gpa);
+        decl.dependencies.deinit(gpa);
         if (module.emit_h != null) {
-            const decl_plus_emit_h = @fieldParentPtr(DeclPlusEmitH, "decl", self);
+            const decl_plus_emit_h = @fieldParentPtr(DeclPlusEmitH, "decl", decl);
             decl_plus_emit_h.emit_h.fwd_decl.deinit(gpa);
             gpa.destroy(decl_plus_emit_h);
         } else {
-            gpa.destroy(self);
+            gpa.destroy(decl);
         }
     }
 
-    pub fn srcLoc(self: Decl) SrcLoc {
+    pub fn srcLoc(decl: *const Decl) SrcLoc {
         return .{
-            .byte_offset = self.src(),
-            .file_scope = self.getFileScope(),
+            .decl = decl,
+            .byte_offset = 0,
         };
     }
 
-    pub fn src(self: Decl) usize {
-        const tree = &self.container.file_scope.tree;
-        const decl_node = tree.rootDecls()[self.src_index];
-        return tree.tokens.items(.start)[tree.firstToken(decl_node)];
+    pub fn srcNode(decl: Decl) u32 {
+        const tree = &decl.container.file_scope.tree;
+        return tree.rootDecls()[decl.src_index];
+    }
+
+    pub fn srcToken(decl: Decl) u32 {
+        const tree = &decl.container.file_scope.tree;
+        return tree.firstToken(decl.srcNode());
+    }
+
+    pub fn srcByteOffset(decl: Decl) u32 {
+        const tree = &decl.container.file_scope.tree;
+        return tree.tokens.items(.start)[decl.srcToken()];
     }
 
-    pub fn fullyQualifiedNameHash(self: Decl) Scope.NameHash {
-        return self.container.fullyQualifiedNameHash(mem.spanZ(self.name));
+    pub fn fullyQualifiedNameHash(decl: Decl) Scope.NameHash {
+        return decl.container.fullyQualifiedNameHash(mem.spanZ(decl.name));
     }
 
-    pub fn typedValue(self: *Decl) error{AnalysisFail}!TypedValue {
-        const tvm = self.typedValueManaged() orelse return error.AnalysisFail;
+    pub fn typedValue(decl: *Decl) error{AnalysisFail}!TypedValue {
+        const tvm = decl.typedValueManaged() orelse return error.AnalysisFail;
         return tvm.typed_value;
     }
 
-    pub fn value(self: *Decl) error{AnalysisFail}!Value {
-        return (try self.typedValue()).val;
+    pub fn value(decl: *Decl) error{AnalysisFail}!Value {
+        return (try decl.typedValue()).val;
     }
 
-    pub fn dump(self: *Decl) void {
-        const loc = std.zig.findLineColumn(self.scope.source.bytes, self.src);
+    pub fn dump(decl: *Decl) void {
+        const loc = std.zig.findLineColumn(decl.getFileScope().source.bytes, decl.srcByteOffset());
         std.debug.print("{s}:{d}:{d} name={s} status={s}", .{
-            self.scope.sub_file_path,
+            decl.getFileScope().sub_file_path,
             loc.line + 1,
             loc.column + 1,
-            mem.spanZ(self.name),
-            @tagName(self.analysis),
+            mem.spanZ(decl.name),
+            @tagName(decl.analysis),
         });
-        if (self.typedValueManaged()) |tvm| {
+        if (decl.typedValueManaged()) |tvm| {
             std.debug.print(" ty={} val={}", .{ tvm.typed_value.ty, tvm.typed_value.val });
         }
         std.debug.print("\n", .{});
     }
 
-    pub fn typedValueManaged(self: *Decl) ?*TypedValue.Managed {
-        switch (self.typed_value) {
+    pub fn typedValueManaged(decl: *Decl) ?*TypedValue.Managed {
+        switch (decl.typed_value) {
             .most_recent => |*x| return x,
             .never_succeeded => return null,
         }
     }
 
-    pub fn getFileScope(self: Decl) *Scope.File {
-        return self.container.file_scope;
+    pub fn getFileScope(decl: Decl) *Scope.File {
+        return decl.container.file_scope;
     }
 
     pub fn getEmitH(decl: *Decl, module: *Module) *EmitH {
@@ -294,12 +304,12 @@ pub const Decl = struct {
         return &decl_plus_emit_h.emit_h;
     }
 
-    fn removeDependant(self: *Decl, other: *Decl) void {
-        self.dependants.removeAssertDiscard(other);
+    fn removeDependant(decl: *Decl, other: *Decl) void {
+        decl.dependants.removeAssertDiscard(other);
     }
 
-    fn removeDependency(self: *Decl, other: *Decl) void {
-        self.dependencies.removeAssertDiscard(other);
+    fn removeDependency(decl: *Decl, other: *Decl) void {
+        decl.dependencies.removeAssertDiscard(other);
     }
 };
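The new `Decl` source-location helpers form a chain (AST node via `srcNode`, first token via `srcToken`, byte offset via `srcByteOffset`). A minimal sketch of resolving a `Decl` to a line and column for diagnostics, mirroring the `findLineColumn` usage already in `dump` (the helper function itself is hypothetical, not part of this commit):

```zig
// Hypothetical sketch: print a Decl's 1-based line:column using the
// helpers above. `source` is the loaded bytes of the Decl's file.
fn printDeclLoc(decl: Module.Decl, source: []const u8) void {
    const loc = std.zig.findLineColumn(source, decl.srcByteOffset());
    std.debug.print("{d}:{d}\n", .{ loc.line + 1, loc.column + 1 });
}
```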
 
@@ -316,9 +326,14 @@ pub const Fn = struct {
     /// Contains un-analyzed ZIR instructions generated from Zig source AST.
     /// Even after we finish analysis, the ZIR is kept in memory, so that
     /// comptime and inline function calls can happen.
-    zir: zir.Body,
+    /// Parameter names are stored here so that they may be referenced for
+    /// debug info without having source code bytes loaded into memory.
+    /// The number of parameters is determined by referring to the type.
+    /// The first N elements of `extra` are indexes into `string_bytes`,
+    /// each pointing to a null-terminated string.
+    zir: zir.Code,
     /// undefined unless analysis state is `success`.
-    body: Body,
+    body: ir.Body,
     state: Analysis,
 
     pub const Analysis = enum {
@@ -336,8 +351,8 @@ pub const Fn = struct {
     };
 
     /// For debugging purposes.
-    pub fn dump(self: *Fn, mod: Module) void {
-        zir.dumpFn(mod, self);
+    pub fn dump(func: *Fn, mod: Module) void {
+        zir.dumpFn(mod, func);
     }
 };
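Per the doc comment on `Fn.zir`, parameter names live in the finished `zir.Code` rather than in ZIR instructions. A minimal sketch (not from this commit; field types and the exact `zir.Code` layout are assumptions) of how a name could be read back out:

```zig
// Hypothetical sketch: recover the name of parameter `param_index`.
// Assumes `extra` and `string_bytes` are flat arrays as described above,
// with the first N `extra` elements indexing null-terminated strings.
fn paramName(code: zir.Code, param_index: u32) [*:0]const u8 {
    const str_index = code.extra[param_index];
    return @ptrCast([*:0]const u8, code.string_bytes.ptr + str_index);
}
```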
 
@@ -364,68 +379,68 @@ pub const Scope = struct {
     }
 
     /// Returns the arena Allocator associated with the Decl of the Scope.
-    pub fn arena(self: *Scope) *Allocator {
-        switch (self.tag) {
-            .block => return self.cast(Block).?.arena,
-            .gen_zir => return self.cast(GenZIR).?.arena,
-            .local_val => return self.cast(LocalVal).?.gen_zir.arena,
-            .local_ptr => return self.cast(LocalPtr).?.gen_zir.arena,
-            .gen_suspend => return self.cast(GenZIR).?.arena,
-            .gen_nosuspend => return self.cast(Nosuspend).?.gen_zir.arena,
+    pub fn arena(scope: *Scope) *Allocator {
+        switch (scope.tag) {
+            .block => return scope.cast(Block).?.arena,
+            .gen_zir => return scope.cast(GenZir).?.arena,
+            .local_val => return scope.cast(LocalVal).?.gen_zir.arena,
+            .local_ptr => return scope.cast(LocalPtr).?.gen_zir.arena,
+            .gen_suspend => return scope.cast(GenZir).?.arena,
+            .gen_nosuspend => return scope.cast(Nosuspend).?.gen_zir.arena,
             .file => unreachable,
             .container => unreachable,
         }
     }
 
-    pub fn isComptime(self: *Scope) bool {
-        return self.getGenZIR().force_comptime;
+    pub fn isComptime(scope: *Scope) bool {
+        return scope.getGenZir().force_comptime;
     }
 
-    pub fn ownerDecl(self: *Scope) ?*Decl {
-        return switch (self.tag) {
-            .block => self.cast(Block).?.owner_decl,
-            .gen_zir => self.cast(GenZIR).?.decl,
-            .local_val => self.cast(LocalVal).?.gen_zir.decl,
-            .local_ptr => self.cast(LocalPtr).?.gen_zir.decl,
-            .gen_suspend => return self.cast(GenZIR).?.decl,
-            .gen_nosuspend => return self.cast(Nosuspend).?.gen_zir.decl,
+    pub fn ownerDecl(scope: *Scope) ?*Decl {
+        return switch (scope.tag) {
+            .block => scope.cast(Block).?.owner_decl,
+            .gen_zir => scope.cast(GenZir).?.zir_code.decl,
+            .local_val => scope.cast(LocalVal).?.gen_zir.zir_code.decl,
+            .local_ptr => scope.cast(LocalPtr).?.gen_zir.zir_code.decl,
+            .gen_suspend => return scope.cast(GenZir).?.zir_code.decl,
+            .gen_nosuspend => return scope.cast(Nosuspend).?.gen_zir.zir_code.decl,
             .file => null,
             .container => null,
         };
     }
 
-    pub fn srcDecl(self: *Scope) ?*Decl {
-        return switch (self.tag) {
-            .block => self.cast(Block).?.src_decl,
-            .gen_zir => self.cast(GenZIR).?.decl,
-            .local_val => self.cast(LocalVal).?.gen_zir.decl,
-            .local_ptr => self.cast(LocalPtr).?.gen_zir.decl,
-            .gen_suspend => return self.cast(GenZIR).?.decl,
-            .gen_nosuspend => return self.cast(Nosuspend).?.gen_zir.decl,
+    pub fn srcDecl(scope: *Scope) ?*Decl {
+        return switch (scope.tag) {
+            .block => scope.cast(Block).?.src_decl,
+            .gen_zir => scope.cast(GenZir).?.zir_code.decl,
+            .local_val => scope.cast(LocalVal).?.gen_zir.zir_code.decl,
+            .local_ptr => scope.cast(LocalPtr).?.gen_zir.zir_code.decl,
+            .gen_suspend => return scope.cast(GenZir).?.zir_code.decl,
+            .gen_nosuspend => return scope.cast(Nosuspend).?.gen_zir.zir_code.decl,
             .file => null,
             .container => null,
         };
     }
 
     /// Asserts the scope has a parent which is a Container and returns it.
-    pub fn namespace(self: *Scope) *Container {
-        switch (self.tag) {
-            .block => return self.cast(Block).?.owner_decl.container,
-            .gen_zir => return self.cast(GenZIR).?.decl.container,
-            .local_val => return self.cast(LocalVal).?.gen_zir.decl.container,
-            .local_ptr => return self.cast(LocalPtr).?.gen_zir.decl.container,
-            .file => return &self.cast(File).?.root_container,
-            .container => return self.cast(Container).?,
-            .gen_suspend => return self.cast(GenZIR).?.decl.container,
-            .gen_nosuspend => return self.cast(Nosuspend).?.gen_zir.decl.container,
+    pub fn namespace(scope: *Scope) *Container {
+        switch (scope.tag) {
+            .block => return scope.cast(Block).?.sema.owner_decl.container,
+            .gen_zir => return scope.cast(GenZir).?.zir_code.decl.container,
+            .local_val => return scope.cast(LocalVal).?.gen_zir.zir_code.decl.container,
+            .local_ptr => return scope.cast(LocalPtr).?.gen_zir.zir_code.decl.container,
+            .file => return &scope.cast(File).?.root_container,
+            .container => return scope.cast(Container).?,
+            .gen_suspend => return scope.cast(GenZir).?.zir_code.decl.container,
+            .gen_nosuspend => return scope.cast(Nosuspend).?.gen_zir.zir_code.decl.container,
         }
     }
 
     /// Must generate unique bytes with no collisions with other decls.
     /// The point of hashing here is only to limit the number of bytes of
     /// the unique identifier to a fixed size (16 bytes).
-    pub fn fullyQualifiedNameHash(self: *Scope, name: []const u8) NameHash {
-        switch (self.tag) {
+    pub fn fullyQualifiedNameHash(scope: *Scope, name: []const u8) NameHash {
+        switch (scope.tag) {
             .block => unreachable,
             .gen_zir => unreachable,
             .local_val => unreachable,
@@ -433,32 +448,32 @@ pub const Scope = struct {
             .gen_suspend => unreachable,
             .gen_nosuspend => unreachable,
             .file => unreachable,
-            .container => return self.cast(Container).?.fullyQualifiedNameHash(name),
+            .container => return scope.cast(Container).?.fullyQualifiedNameHash(name),
         }
     }
 
     /// Asserts the scope is a child of a File and has an AST tree and returns the tree.
-    pub fn tree(self: *Scope) *const ast.Tree {
-        switch (self.tag) {
-            .file => return &self.cast(File).?.tree,
-            .block => return &self.cast(Block).?.src_decl.container.file_scope.tree,
-            .gen_zir => return &self.cast(GenZIR).?.decl.container.file_scope.tree,
-            .local_val => return &self.cast(LocalVal).?.gen_zir.decl.container.file_scope.tree,
-            .local_ptr => return &self.cast(LocalPtr).?.gen_zir.decl.container.file_scope.tree,
-            .container => return &self.cast(Container).?.file_scope.tree,
-            .gen_suspend => return &self.cast(GenZIR).?.decl.container.file_scope.tree,
-            .gen_nosuspend => return &self.cast(Nosuspend).?.gen_zir.decl.container.file_scope.tree,
-        }
-    }
-
-    /// Asserts the scope is a child of a `GenZIR` and returns it.
-    pub fn getGenZIR(self: *Scope) *GenZIR {
-        return switch (self.tag) {
+    pub fn tree(scope: *Scope) *const ast.Tree {
+        switch (scope.tag) {
+            .file => return &scope.cast(File).?.tree,
+            .block => return &scope.cast(Block).?.src_decl.container.file_scope.tree,
+            .gen_zir => return &scope.cast(GenZir).?.zir_code.decl.container.file_scope.tree,
+            .local_val => return &scope.cast(LocalVal).?.gen_zir.zir_code.decl.container.file_scope.tree,
+            .local_ptr => return &scope.cast(LocalPtr).?.gen_zir.zir_code.decl.container.file_scope.tree,
+            .container => return &scope.cast(Container).?.file_scope.tree,
+            .gen_suspend => return &scope.cast(GenZir).?.zir_code.decl.container.file_scope.tree,
+            .gen_nosuspend => return &scope.cast(Nosuspend).?.gen_zir.zir_code.decl.container.file_scope.tree,
+        }
+    }
+
+    /// Asserts the scope is a child of a `GenZir` and returns it.
+    pub fn getGenZir(scope: *Scope) *GenZir {
+        return switch (scope.tag) {
             .block => unreachable,
-            .gen_zir, .gen_suspend => self.cast(GenZIR).?,
-            .local_val => return self.cast(LocalVal).?.gen_zir,
-            .local_ptr => return self.cast(LocalPtr).?.gen_zir,
-            .gen_nosuspend => return self.cast(Nosuspend).?.gen_zir,
+            .gen_zir, .gen_suspend => scope.cast(GenZir).?,
+            .local_val => return scope.cast(LocalVal).?.gen_zir,
+            .local_ptr => return scope.cast(LocalPtr).?.gen_zir,
+            .gen_nosuspend => return scope.cast(Nosuspend).?.gen_zir,
             .file => unreachable,
             .container => unreachable,
         };
@@ -499,25 +514,25 @@ pub const Scope = struct {
             cur = switch (cur.tag) {
                 .container => return @fieldParentPtr(Container, "base", cur).file_scope,
                 .file => return @fieldParentPtr(File, "base", cur),
-                .gen_zir => @fieldParentPtr(GenZIR, "base", cur).parent,
+                .gen_zir => @fieldParentPtr(GenZir, "base", cur).parent,
                 .local_val => @fieldParentPtr(LocalVal, "base", cur).parent,
                 .local_ptr => @fieldParentPtr(LocalPtr, "base", cur).parent,
                 .block => return @fieldParentPtr(Block, "base", cur).src_decl.container.file_scope,
-                .gen_suspend => @fieldParentPtr(GenZIR, "base", cur).parent,
+                .gen_suspend => @fieldParentPtr(GenZir, "base", cur).parent,
                 .gen_nosuspend => @fieldParentPtr(Nosuspend, "base", cur).parent,
             };
         }
     }
 
-    pub fn getSuspend(base: *Scope) ?*Scope.GenZIR {
+    pub fn getSuspend(base: *Scope) ?*Scope.GenZir {
         var cur = base;
         while (true) {
             cur = switch (cur.tag) {
-                .gen_zir => @fieldParentPtr(GenZIR, "base", cur).parent,
+                .gen_zir => @fieldParentPtr(GenZir, "base", cur).parent,
                 .local_val => @fieldParentPtr(LocalVal, "base", cur).parent,
                 .local_ptr => @fieldParentPtr(LocalPtr, "base", cur).parent,
                 .gen_nosuspend => @fieldParentPtr(Nosuspend, "base", cur).parent,
-                .gen_suspend => return @fieldParentPtr(GenZIR, "base", cur),
+                .gen_suspend => return @fieldParentPtr(GenZir, "base", cur),
                 else => return null,
             };
         }
@@ -527,10 +542,10 @@ pub const Scope = struct {
         var cur = base;
         while (true) {
             cur = switch (cur.tag) {
-                .gen_zir => @fieldParentPtr(GenZIR, "base", cur).parent,
+                .gen_zir => @fieldParentPtr(GenZir, "base", cur).parent,
                 .local_val => @fieldParentPtr(LocalVal, "base", cur).parent,
                 .local_ptr => @fieldParentPtr(LocalPtr, "base", cur).parent,
-                .gen_suspend => @fieldParentPtr(GenZIR, "base", cur).parent,
+                .gen_suspend => @fieldParentPtr(GenZir, "base", cur).parent,
                 .gen_nosuspend => return @fieldParentPtr(Nosuspend, "base", cur),
                 else => return null,
             };
@@ -568,19 +583,19 @@ pub const Scope = struct {
         decls: std.AutoArrayHashMapUnmanaged(*Decl, void) = .{},
         ty: Type,
 
-        pub fn deinit(self: *Container, gpa: *Allocator) void {
-            self.decls.deinit(gpa);
+        pub fn deinit(cont: *Container, gpa: *Allocator) void {
+            cont.decls.deinit(gpa);
             // TODO either Container or File should have an arena for sub_file_path and ty
-            gpa.destroy(self.ty.castTag(.empty_struct).?);
-            gpa.free(self.file_scope.sub_file_path);
-            self.* = undefined;
+            gpa.destroy(cont.ty.castTag(.empty_struct).?);
+            gpa.free(cont.file_scope.sub_file_path);
+            cont.* = undefined;
         }
 
-        pub fn removeDecl(self: *Container, child: *Decl) void {
-            _ = self.decls.swapRemove(child);
+        pub fn removeDecl(cont: *Container, child: *Decl) void {
+            _ = cont.decls.swapRemove(child);
         }
 
-        pub fn fullyQualifiedNameHash(self: *Container, name: []const u8) NameHash {
+        pub fn fullyQualifiedNameHash(cont: *Container, name: []const u8) NameHash {
             // TODO container scope qualified names.
             return std.zig.hashSrc(name);
         }
@@ -610,55 +625,55 @@ pub const Scope = struct {
 
         root_container: Container,
 
-        pub fn unload(self: *File, gpa: *Allocator) void {
-            switch (self.status) {
+        pub fn unload(file: *File, gpa: *Allocator) void {
+            switch (file.status) {
                 .never_loaded,
                 .unloaded_parse_failure,
                 .unloaded_success,
                 => {},
 
                 .loaded_success => {
-                    self.tree.deinit(gpa);
-                    self.status = .unloaded_success;
+                    file.tree.deinit(gpa);
+                    file.status = .unloaded_success;
                 },
             }
-            switch (self.source) {
+            switch (file.source) {
                 .bytes => |bytes| {
                     gpa.free(bytes);
-                    self.source = .{ .unloaded = {} };
+                    file.source = .{ .unloaded = {} };
                 },
                 .unloaded => {},
             }
         }
 
-        pub fn deinit(self: *File, gpa: *Allocator) void {
-            self.root_container.deinit(gpa);
-            self.unload(gpa);
-            self.* = undefined;
+        pub fn deinit(file: *File, gpa: *Allocator) void {
+            file.root_container.deinit(gpa);
+            file.unload(gpa);
+            file.* = undefined;
         }
 
-        pub fn destroy(self: *File, gpa: *Allocator) void {
-            self.deinit(gpa);
-            gpa.destroy(self);
+        pub fn destroy(file: *File, gpa: *Allocator) void {
+            file.deinit(gpa);
+            gpa.destroy(file);
         }
 
-        pub fn dumpSrc(self: *File, src: usize) void {
-            const loc = std.zig.findLineColumn(self.source.bytes, src);
-            std.debug.print("{s}:{d}:{d}\n", .{ self.sub_file_path, loc.line + 1, loc.column + 1 });
+        pub fn dumpSrc(file: *File, src: LazySrcLoc) void {
+            const loc = std.zig.findLineColumn(file.source.bytes, src);
+            std.debug.print("{s}:{d}:{d}\n", .{ file.sub_file_path, loc.line + 1, loc.column + 1 });
         }
 
-        pub fn getSource(self: *File, module: *Module) ![:0]const u8 {
-            switch (self.source) {
+        pub fn getSource(file: *File, module: *Module) ![:0]const u8 {
+            switch (file.source) {
                 .unloaded => {
-                    const source = try self.pkg.root_src_directory.handle.readFileAllocOptions(
+                    const source = try file.pkg.root_src_directory.handle.readFileAllocOptions(
                         module.gpa,
-                        self.sub_file_path,
+                        file.sub_file_path,
                         std.math.maxInt(u32),
                         null,
                         1,
                         0,
                     );
-                    self.source = .{ .bytes = source };
+                    file.source = .{ .bytes = source };
                     return source;
                 },
                 .bytes => |bytes| return bytes,
@@ -666,37 +681,30 @@ pub const Scope = struct {
         }
     };
 
-    /// This is a temporary structure, references to it are valid only
+    /// This is the context needed to semantically analyze ZIR instructions and
+    /// produce TZIR instructions.
+    /// This is a temporary structure stored on the stack; references to it are valid only
     /// during semantic analysis of the block.
     pub const Block = struct {
         pub const base_tag: Tag = .block;
 
         base: Scope = Scope{ .tag = base_tag },
         parent: ?*Block,
-        /// Maps ZIR to TZIR. Shared to sub-blocks.
-        inst_table: *InstTable,
-        func: ?*Fn,
-        /// When analyzing an inline function call, owner_decl is the Decl of the caller
-        /// and src_decl is the Decl of the callee.
-        /// This Decl owns the arena memory of this Block.
-        owner_decl: *Decl,
+        /// Shared among all child blocks.
+        sema: *Sema,
         /// This Decl is the Decl according to the Zig source code corresponding to this Block.
+        /// This can vary during inline or comptime function calls. See `Sema.owner_decl`
+        /// for the one that will be the same for all Block instances.
         src_decl: *Decl,
-        instructions: ArrayListUnmanaged(*Inst),
-        /// Points to the arena allocator of the Decl.
-        arena: *Allocator,
+        instructions: ArrayListUnmanaged(*ir.Inst),
         label: ?Label = null,
         inlining: ?*Inlining,
         is_comptime: bool,
-        /// Shared to sub-blocks.
-        branch_quota: *u32,
-
-        pub const InstTable = std.AutoHashMap(*zir.Inst, *Inst);
 
         /// This `Block` maps a block ZIR instruction to the corresponding
         /// TZIR instruction for break instruction analysis.
         pub const Label = struct {
-            zir_block: *zir.Inst.Block,
+            zir_block: zir.Inst.Index,
             merges: Merges,
         };
 
@@ -712,7 +720,7 @@ pub const Scope = struct {
             /// which parameter index they are, without having to store
             /// a parameter index with each arg instruction.
             param_index: usize,
-            casted_args: []*Inst,
+            casted_args: []*ir.Inst,
             merges: Merges,
 
             pub const Shared = struct {
@@ -722,25 +730,25 @@ pub const Scope = struct {
         };
 
         pub const Merges = struct {
-            block_inst: *Inst.Block,
+            block_inst: *ir.Inst.Block,
             /// Separate array list from break_inst_list so that it can be passed directly
             /// to resolvePeerTypes.
-            results: ArrayListUnmanaged(*Inst),
+            results: ArrayListUnmanaged(*ir.Inst),
             /// Keeps track of the break instructions so that the operand can be replaced
             /// if we need to add type coercion at the end of block analysis.
             /// Same indexes, capacity, length as `results`.
-            br_list: ArrayListUnmanaged(*Inst.Br),
+            br_list: ArrayListUnmanaged(*ir.Inst.Br),
         };
 
         /// For debugging purposes.
-        pub fn dump(self: *Block, mod: Module) void {
-            zir.dumpBlock(mod, self);
+        pub fn dump(block: *Block, mod: Module) void {
+            zir.dumpBlock(mod, block);
         }
 
         pub fn makeSubBlock(parent: *Block) Block {
             return .{
                 .parent = parent,
-                .inst_table = parent.inst_table,
-                .func = parent.func,
-                .owner_decl = parent.owner_decl,
+                .sema = parent.sema,
                 .src_decl = parent.src_decl,
@@ -752,27 +760,186 @@ pub const Scope = struct {
-                .branch_quota = parent.branch_quota,
             };
         }
+
+        pub fn wantSafety(block: *const Block) bool {
+            // TODO take into account scope's safety overrides
+            return switch (block.sema.mod.optimizeMode()) {
+                .Debug => true,
+                .ReleaseSafe => true,
+                .ReleaseFast => false,
+                .ReleaseSmall => false,
+            };
+        }
+
+        pub fn getFileScope(block: *Block) *Scope.File {
+            return block.src_decl.container.file_scope;
+        }
+
+        pub fn addNoOp(
+            block: *Scope.Block,
+            src: LazySrcLoc,
+            ty: Type,
+            comptime tag: ir.Inst.Tag,
+        ) !*ir.Inst {
+            const inst = try block.sema.arena.create(tag.Type());
+            inst.* = .{
+                .base = .{
+                    .tag = tag,
+                    .ty = ty,
+                    .src = src,
+                },
+            };
+            try block.instructions.append(block.sema.gpa, &inst.base);
+            return &inst.base;
+        }
+
+        pub fn addUnOp(
+            block: *Scope.Block,
+            src: LazySrcLoc,
+            ty: Type,
+            tag: ir.Inst.Tag,
+            operand: *ir.Inst,
+        ) !*ir.Inst {
+            const inst = try block.sema.arena.create(ir.Inst.UnOp);
+            inst.* = .{
+                .base = .{
+                    .tag = tag,
+                    .ty = ty,
+                    .src = src,
+                },
+                .operand = operand,
+            };
+            try block.instructions.append(block.sema.gpa, &inst.base);
+            return &inst.base;
+        }
+
+        pub fn addBinOp(
+            block: *Scope.Block,
+            src: LazySrcLoc,
+            ty: Type,
+            tag: ir.Inst.Tag,
+            lhs: *ir.Inst,
+            rhs: *ir.Inst,
+        ) !*ir.Inst {
+            const inst = try block.sema.arena.create(ir.Inst.BinOp);
+            inst.* = .{
+                .base = .{
+                    .tag = tag,
+                    .ty = ty,
+                    .src = src,
+                },
+                .lhs = lhs,
+                .rhs = rhs,
+            };
+            try block.instructions.append(block.sema.gpa, &inst.base);
+            return &inst.base;
+        }
+
+        pub fn addBr(
+            scope_block: *Scope.Block,
+            src: LazySrcLoc,
+            target_block: *ir.Inst.Block,
+            operand: *ir.Inst,
+        ) !*ir.Inst.Br {
+            const inst = try scope_block.sema.arena.create(ir.Inst.Br);
+            inst.* = .{
+                .base = .{
+                    .tag = .br,
+                    .ty = Type.initTag(.noreturn),
+                    .src = src,
+                },
+                .operand = operand,
+                .block = target_block,
+            };
+            try scope_block.instructions.append(scope_block.sema.gpa, &inst.base);
+            return inst;
+        }
+
+        pub fn addCondBr(
+            block: *Scope.Block,
+            src: LazySrcLoc,
+            condition: *ir.Inst,
+            then_body: ir.Body,
+            else_body: ir.Body,
+        ) !*ir.Inst {
+            const inst = try block.sema.arena.create(ir.Inst.CondBr);
+            inst.* = .{
+                .base = .{
+                    .tag = .condbr,
+                    .ty = Type.initTag(.noreturn),
+                    .src = src,
+                },
+                .condition = condition,
+                .then_body = then_body,
+                .else_body = else_body,
+            };
+            try block.instructions.append(block.sema.gpa, &inst.base);
+            return &inst.base;
+        }
+
+        pub fn addCall(
+            block: *Scope.Block,
+            src: LazySrcLoc,
+            ty: Type,
+            func: *ir.Inst,
+            args: []const *ir.Inst,
+        ) !*ir.Inst {
+            const inst = try block.sema.arena.create(ir.Inst.Call);
+            inst.* = .{
+                .base = .{
+                    .tag = .call,
+                    .ty = ty,
+                    .src = src,
+                },
+                .func = func,
+                .args = args,
+            };
+            try block.instructions.append(block.sema.gpa, &inst.base);
+            return &inst.base;
+        }
+
+        pub fn addSwitchBr(
+            block: *Scope.Block,
+            src: LazySrcLoc,
+            target: *ir.Inst,
+            cases: []ir.Inst.SwitchBr.Case,
+            else_body: ir.Body,
+        ) !*ir.Inst {
+            const inst = try block.sema.arena.create(ir.Inst.SwitchBr);
+            inst.* = .{
+                .base = .{
+                    .tag = .switchbr,
+                    .ty = Type.initTag(.noreturn),
+                    .src = src,
+                },
+                .target = target,
+                .cases = cases,
+                .else_body = else_body,
+            };
+            try block.instructions.append(block.sema.gpa, &inst.base);
+            return &inst.base;
+        }
     };
 
-    /// This is a temporary structure, references to it are valid only
-    /// during semantic analysis of the decl.
-    pub const GenZIR = struct {
+    /// This is a temporary structure; references to it are valid only
+    /// while constructing a `zir.Code`.
+    pub const GenZir = struct {
         pub const base_tag: Tag = .gen_zir;
         base: Scope = Scope{ .tag = base_tag },
-        /// Parents can be: `GenZIR`, `File`
-        parent: *Scope,
-        decl: *Decl,
-        arena: *Allocator,
         force_comptime: bool,
-        /// The first N instructions in a function body ZIR are arg instructions.
-        instructions: std.ArrayListUnmanaged(*zir.Inst) = .{},
+        /// Parents can be: `GenZir`, `File`
+        parent: *Scope,
+        /// All `GenZir` scopes for the same ZIR share this.
+        zir_code: *WipZirCode,
+        /// Keeps track of the list of instructions in this scope only. Indices
+        /// point into the instructions of `zir_code`.
+        instructions: std.ArrayListUnmanaged(zir.Inst.Index) = .{},
         label: ?Label = null,
-        break_block: ?*zir.Inst.Block = null,
-        continue_block: ?*zir.Inst.Block = null,
+        break_block: zir.Inst.Index = 0,
+        continue_block: zir.Inst.Index = 0,
         /// Only valid when setBlockResultLoc is called.
         break_result_loc: astgen.ResultLoc = undefined,
         /// When a block has a pointer result location, here it is.
-        rl_ptr: ?*zir.Inst = null,
+        rl_ptr: zir.Inst.Index = 0,
         /// Keeps track of how many branches of a block did not actually
         /// consume the result location. astgen uses this to figure out
         /// whether to rely on break instructions or writing to the result
@@ -784,19 +951,95 @@ pub const Scope = struct {
         break_count: usize = 0,
         /// Tracks `break :foo bar` instructions so they can possibly be elided later if
         /// the labeled block ends up not needing a result location pointer.
-        labeled_breaks: std.ArrayListUnmanaged(*zir.Inst.Break) = .{},
+        labeled_breaks: std.ArrayListUnmanaged(zir.Inst.Index) = .{},
         /// Tracks `store_to_block_ptr` instructions that correspond to break instructions
         /// so they can possibly be elided later if the labeled block ends up not needing
         /// a result location pointer.
-        labeled_store_to_block_ptr_list: std.ArrayListUnmanaged(*zir.Inst.BinOp) = .{},
-        /// for suspend error notes
-        src: usize = 0,
+        labeled_store_to_block_ptr_list: std.ArrayListUnmanaged(zir.Inst.Index) = .{},
 
         pub const Label = struct {
             token: ast.TokenIndex,
-            block_inst: *zir.Inst.Block,
+            block_inst: zir.Inst.Index,
             used: bool = false,
         };
+
+        pub fn addFnTypeCc(gz: *GenZir, args: struct {
+            param_types: []const zir.Inst.Index,
+            ret_ty: zir.Inst.Index,
+            cc: zir.Inst.Index,
+        }) !zir.Inst.Index {
+            const gpa = gz.zir_code.gpa;
+            try gz.instructions.ensureCapacity(gpa, gz.instructions.items.len + 1);
+            try gz.zir_code.instructions.ensureCapacity(gpa, gz.zir_code.instructions.len + 1);
+            try gz.zir_code.extra.ensureCapacity(gpa, gz.zir_code.extra.items.len +
+                @typeInfo(zir.Inst.FnTypeCc).Struct.fields.len + args.param_types.len);
+
+            const payload_index = gz.addExtra(zir.Inst.FnTypeCc, .{
+                .cc = args.cc,
+                .param_types_len = @intCast(u32, args.param_types.len),
+            }) catch unreachable; // Capacity is ensured above.
+            gz.zir_code.extra.appendSliceAssumeCapacity(args.param_types);
+
+            const new_index = @intCast(zir.Inst.Index, gz.zir_code.instructions.len);
+            gz.zir_code.instructions.appendAssumeCapacity(.{
+                .tag = .fn_type_cc,
+                .data = .{ .fn_type = .{
+                    .return_type = args.ret_ty,
+                    .payload_index = payload_index,
+                } },
+            });
+            gz.instructions.appendAssumeCapacity(new_index);
+            return new_index;
+        }
+
+        pub fn addFnType(
+            gz: *GenZir,
+            ret_ty: zir.Inst.Index,
+            param_types: []const zir.Inst.Index,
+        ) !zir.Inst.Index {
+            const gpa = gz.zir_code.gpa;
+            try gz.instructions.ensureCapacity(gpa, gz.instructions.items.len + 1);
+            try gz.zir_code.instructions.ensureCapacity(gpa, gz.zir_code.instructions.len + 1);
+            try gz.zir_code.extra.ensureCapacity(gpa, gz.zir_code.extra.items.len +
+                @typeInfo(zir.Inst.FnType).Struct.fields.len + param_types.len);
+
+            const payload_index = gz.addExtra(zir.Inst.FnType, .{
+                .param_types_len = @intCast(u32, param_types.len),
+            }) catch unreachable; // Capacity is ensured above.
+            gz.zir_code.extra.appendSliceAssumeCapacity(param_types);
+
+            const new_index = @intCast(zir.Inst.Index, gz.zir_code.instructions.len);
+            gz.zir_code.instructions.appendAssumeCapacity(.{
+                .tag = .fn_type,
+                .data = .{ .fn_type = .{
+                    .return_type = ret_ty,
+                    .payload_index = payload_index,
+                } },
+            });
+            gz.instructions.appendAssumeCapacity(new_index);
+            return new_index;
+        }
+
+        pub fn addRetTok(
+            gz: *GenZir,
+            operand: zir.Inst.Index,
+            src_tok: ast.TokenIndex,
+        ) !zir.Inst.Index {
+            const gpa = gz.zir_code.gpa;
+            try gz.instructions.ensureCapacity(gpa, gz.instructions.items.len + 1);
+            try gz.zir_code.instructions.ensureCapacity(gpa, gz.zir_code.instructions.len + 1);
+
+            const new_index = @intCast(zir.Inst.Index, gz.zir_code.instructions.len);
+            gz.zir_code.instructions.appendAssumeCapacity(.{
+                .tag = .ret_tok,
+                .data = .{ .un_tok = .{
+                    .operand = operand,
+                    .src_tok = src_tok,
+                } },
+            });
+            gz.instructions.appendAssumeCapacity(new_index);
+            return new_index;
+        }
     };
 
     /// This is always a `const` local and importantly the `inst` is a value type, not a pointer.
@@ -805,11 +1048,11 @@ pub const Scope = struct {
     pub const LocalVal = struct {
         pub const base_tag: Tag = .local_val;
         base: Scope = Scope{ .tag = base_tag },
-        /// Parents can be: `LocalVal`, `LocalPtr`, `GenZIR`.
+        /// Parents can be: `LocalVal`, `LocalPtr`, `GenZir`.
         parent: *Scope,
-        gen_zir: *GenZIR,
+        gen_zir: *GenZir,
         name: []const u8,
-        inst: *zir.Inst,
+        inst: zir.Inst.Index,
     };
 
     /// This could be a `const` or `var` local. It has a pointer instead of a value.
@@ -818,24 +1061,42 @@ pub const Scope = struct {
     pub const LocalPtr = struct {
         pub const base_tag: Tag = .local_ptr;
         base: Scope = Scope{ .tag = base_tag },
-        /// Parents can be: `LocalVal`, `LocalPtr`, `GenZIR`.
+        /// Parents can be: `LocalVal`, `LocalPtr`, `GenZir`.
         parent: *Scope,
-        gen_zir: *GenZIR,
+        gen_zir: *GenZir,
         name: []const u8,
-        ptr: *zir.Inst,
+        ptr: zir.Inst.Index,
     };
 
     pub const Nosuspend = struct {
         pub const base_tag: Tag = .gen_nosuspend;
 
         base: Scope = Scope{ .tag = base_tag },
-        /// Parents can be: `LocalVal`, `LocalPtr`, `GenZIR`.
+        /// Parents can be: `LocalVal`, `LocalPtr`, `GenZir`.
         parent: *Scope,
-        gen_zir: *GenZIR,
-        src: usize,
+        gen_zir: *GenZir,
+        src: LazySrcLoc,
     };
 };
 
+/// A Work-In-Progress `zir.Code`. This is a shared parent of all
+/// `GenZir` scopes. Once the `zir.Code` is produced, this struct
+/// is deinitialized.
+pub const WipZirCode = struct {
+    instructions: std.MultiArrayList(zir.Inst) = .{},
+    string_bytes: std.ArrayListUnmanaged(u8) = .{},
+    extra: std.ArrayListUnmanaged(u32) = .{},
+    arg_count: usize = 0,
+    decl: *Decl,
+    gpa: *Allocator,
+    arena: *Allocator,
+
+    fn deinit(wip_zir_code: *WipZirCode) void {
+        wip_zir_code.instructions.deinit(wip_zir_code.gpa);
+        wip_zir_code.string_bytes.deinit(wip_zir_code.gpa);
+        wip_zir_code.extra.deinit(wip_zir_code.gpa);
+    }
+};
+
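The `finish` method that converts a `WipZirCode` into an immutable `zir.Code` (called from `astgenAndSemaDecl` below) is defined outside this hunk. As a hedged sketch of the intended ownership transfer, assuming `zir.Code` carries `instructions`, `string_bytes`, and `extra` fields mirroring this struct (those field names are assumptions, not confirmed by this diff):

```zig
// Sketch only: the real finish() lives elsewhere in this commit.
// It hands the accumulated buffers over to an immutable zir.Code,
// leaving the WipZirCode empty so that deinit() frees nothing.
fn finish(wip_zir_code: *WipZirCode) zir.Code {
    const gpa = wip_zir_code.gpa;
    return .{
        .instructions = wip_zir_code.instructions.toOwnedSlice(),
        .string_bytes = wip_zir_code.string_bytes.toOwnedSlice(gpa),
        .extra = wip_zir_code.extra.toOwnedSlice(gpa),
    };
}
```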
 /// This struct holds data necessary to construct API-facing `AllErrors.Message`.
 /// Its memory is managed with the general purpose allocator so that they
 /// can be created and destroyed in response to incremental updates.
@@ -855,17 +1116,17 @@ pub const ErrorMsg = struct {
         comptime format: []const u8,
         args: anytype,
     ) !*ErrorMsg {
-        const self = try gpa.create(ErrorMsg);
-        errdefer gpa.destroy(self);
-        self.* = try init(gpa, src_loc, format, args);
-        return self;
+        const err_msg = try gpa.create(ErrorMsg);
+        errdefer gpa.destroy(err_msg);
+        err_msg.* = try init(gpa, src_loc, format, args);
+        return err_msg;
     }
 
     /// Assumes the ErrorMsg struct and msg were both allocated with `gpa`,
     /// as well as all notes.
-    pub fn destroy(self: *ErrorMsg, gpa: *Allocator) void {
-        self.deinit(gpa);
-        gpa.destroy(self);
+    pub fn destroy(err_msg: *ErrorMsg, gpa: *Allocator) void {
+        err_msg.deinit(gpa);
+        gpa.destroy(err_msg);
     }
 
     pub fn init(
@@ -880,84 +1141,231 @@ pub const ErrorMsg = struct {
         };
     }
 
-    pub fn deinit(self: *ErrorMsg, gpa: *Allocator) void {
-        for (self.notes) |*note| {
+    pub fn deinit(err_msg: *ErrorMsg, gpa: *Allocator) void {
+        for (err_msg.notes) |*note| {
             note.deinit(gpa);
         }
-        gpa.free(self.notes);
-        gpa.free(self.msg);
-        self.* = undefined;
+        gpa.free(err_msg.notes);
+        gpa.free(err_msg.msg);
+        err_msg.* = undefined;
     }
 };
 
 /// Canonical reference to a position within a source file.
 pub const SrcLoc = struct {
-    file_scope: *Scope.File,
-    byte_offset: usize,
+    /// The active field is determined by tag of `lazy`.
+    container: union {
+        /// The containing `Decl` according to the source code.
+        decl: *Decl,
+        file_scope: *Scope.File,
+    },
+    /// Offsets in relative tags are resolved against `container.decl`.
+    lazy: LazySrcLoc,
+
+    pub fn fileScope(src_loc: SrcLoc) *Scope.File {
+        return switch (src_loc.lazy) {
+            .unneeded => unreachable,
+            .todo => unreachable,
+
+            .byte_abs,
+            .token_abs,
+            => src_loc.container.file_scope,
+
+            .byte_offset,
+            .token_offset,
+            .node_offset,
+            .node_offset_var_decl_ty,
+            .node_offset_for_cond,
+            .node_offset_builtin_call_arg0,
+            .node_offset_builtin_call_arg1,
+            .node_offset_builtin_call_argn,
+            .node_offset_array_access_index,
+            .node_offset_slice_sentinel,
+            => src_loc.container.decl.container.file_scope,
+        };
+    }
+
+    pub fn byteOffset(src_loc: SrcLoc, mod: *Module) !u32 {
+        switch (src_loc.lazy) {
+            .unneeded => unreachable,
+            .todo => unreachable,
+
+            .byte_abs => |byte_index| return byte_index,
+
+            .token_abs => |tok_index| {
+                const file_scope = src_loc.container.file_scope;
+                const tree = try mod.getAstTree(file_scope);
+                const token_starts = tree.tokens.items(.start);
+                return token_starts[tok_index];
+            },
+            .byte_offset => |byte_off| {
+                const decl = src_loc.container.decl;
+                return decl.srcByteOffset() + byte_off;
+            },
+            .token_offset => |tok_off| {
+                const decl = src_loc.container.decl;
+                const tok_index = decl.srcToken() + tok_off;
+                const tree = try mod.getAstTree(decl.container.file_scope);
+                const token_starts = tree.tokens.items(.start);
+                return token_starts[tok_index];
+            },
+            .node_offset => |node_off| {
+                const decl = src_loc.container.decl;
+                const node_index = decl.srcNode() + node_off;
+                const tree = try mod.getAstTree(decl.container.file_scope);
+                const tok_index = tree.firstToken(node_index);
+                const token_starts = tree.tokens.items(.start);
+                return token_starts[tok_index];
+            },
+            .node_offset_var_decl_ty => @panic("TODO"),
+            .node_offset_for_cond => @panic("TODO"),
+            .node_offset_builtin_call_arg0 => @panic("TODO"),
+            .node_offset_builtin_call_arg1 => @panic("TODO"),
+            .node_offset_builtin_call_argn => unreachable, // Handled specially in `Sema`.
+            .node_offset_array_access_index => @panic("TODO"),
+            .node_offset_slice_sentinel => @panic("TODO"),
+        }
+    }
+};
+
+/// Resolving a source location into a byte offset may require doing work
+/// that we would rather not do unless the error actually occurs.
+/// Therefore we need a data structure that contains the information necessary
+/// to lazily produce a `SrcLoc` as required.
+/// Most of the offsets in this data structure are relative to the containing Decl.
+/// This makes the source location resolve properly even when a Decl gets
+/// shifted up or down in the file, as long as the Decl's contents itself
+/// do not change.
+pub const LazySrcLoc = union(enum) {
+    /// When this tag is set, the code that constructed this `LazySrcLoc` is asserting
+    /// that all code paths which would need to resolve the source location are
+    /// unreachable. If you are debugging a value that is incorrectly `unneeded`,
+    /// consider using reverse-continue with a memory watchpoint to see where the
+    /// value is set to this tag.
+    unneeded,
+    /// Same as `unneeded`, except the code setting up this tag knew that actually
+    /// the source location was needed, and I wanted to get other stuff compiling
+    /// and working before coming back to messing with source locations.
+    /// TODO delete this tag before merging the zir-memory-layout branch.
+    todo,
+    /// The source location points to a byte offset within a source file,
+    /// offset from 0. The source file is determined contextually.
+    /// Inside a `SrcLoc`, the `file_scope` union field will be active.
+    byte_abs: u32,
+    /// The source location points to a token within a source file,
+    /// offset from 0. The source file is determined contextually.
+    /// Inside a `SrcLoc`, the `file_scope` union field will be active.
+    token_abs: u32,
+    /// The source location points to a byte offset within a source file,
+    /// offset from the byte offset of the Decl within the file.
+    /// The Decl is determined contextually.
+    byte_offset: u32,
+    /// This data is the offset into the token list from the Decl token.
+    /// The Decl is determined contextually.
+    token_offset: u32,
+    /// The source location points to an AST node, which is this value offset
+    /// from its containing Decl node AST index.
+    /// The Decl is determined contextually.
+    node_offset: u32,
+    /// The source location points to a variable declaration type expression,
+    /// found by taking this AST node index offset from the containing
+    /// Decl AST node, which points to a variable declaration AST node. Next, navigate
+    /// to the type expression.
+    /// The Decl is determined contextually.
+    node_offset_var_decl_ty: u32,
+    /// The source location points to a for loop condition expression,
+    /// found by taking this AST node index offset from the containing
+    /// Decl AST node, which points to a for loop AST node. Next, navigate
+    /// to the condition expression.
+    /// The Decl is determined contextually.
+    node_offset_for_cond: u32,
+    /// The source location points to the first parameter of a builtin
+    /// function call, found by taking this AST node index offset from the containing
+    /// Decl AST node, which points to a builtin call AST node. Next, navigate
+    /// to the first parameter.
+    /// The Decl is determined contextually.
+    node_offset_builtin_call_arg0: u32,
+    /// Same as `node_offset_builtin_call_arg0` except arg index 1.
+    node_offset_builtin_call_arg1: u32,
+    /// Same as `node_offset_builtin_call_arg0` except the arg index is contextually
+    /// determined.
+    node_offset_builtin_call_argn: u32,
+    /// The source location points to the index expression of an array access
+    /// expression, found by taking this AST node index offset from the containing
+    /// Decl AST node, which points to an array access AST node. Next, navigate
+    /// to the index expression.
+    /// The Decl is determined contextually.
+    node_offset_array_access_index: u32,
+    /// The source location points to the sentinel expression of a slice
+    /// expression, found by taking this AST node index offset from the containing
+    /// Decl AST node, which points to a slice AST node. Next, navigate
+    /// to the sentinel expression.
+    /// The Decl is determined contextually.
+    node_offset_slice_sentinel: u32,
 };
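To illustrate the decl-relative scheme above: the two absolute tags pair with a `Scope.File`, while every relative tag pairs with the contextually determined `Decl`. A hypothetical helper (`toSrcLoc` is not part of this commit) would upgrade a `LazySrcLoc` to a `SrcLoc` like so:

```zig
// Hypothetical helper: shows which SrcLoc.container field each
// LazySrcLoc tag activates. Not present in this commit.
fn toSrcLoc(decl: *Decl, lazy: LazySrcLoc) SrcLoc {
    return switch (lazy) {
        // Absolute offsets resolve against the file alone.
        .byte_abs, .token_abs => .{
            .container = .{ .file_scope = decl.container.file_scope },
            .lazy = lazy,
        },
        // Everything else is an offset from the Decl's own position.
        else => .{
            .container = .{ .decl = decl },
            .lazy = lazy,
        },
    };
}
```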
 
 pub const InnerError = error{ OutOfMemory, AnalysisFail };
 
-pub fn deinit(self: *Module) void {
-    const gpa = self.gpa;
+pub fn deinit(mod: *Module) void {
+    const gpa = mod.gpa;
 
-    self.compile_log_text.deinit(gpa);
+    mod.compile_log_text.deinit(gpa);
 
-    self.zig_cache_artifact_directory.handle.close();
+    mod.zig_cache_artifact_directory.handle.close();
 
-    self.deletion_set.deinit(gpa);
+    mod.deletion_set.deinit(gpa);
 
-    for (self.decl_table.items()) |entry| {
-        entry.value.destroy(self);
+    for (mod.decl_table.items()) |entry| {
+        entry.value.destroy(mod);
     }
-    self.decl_table.deinit(gpa);
+    mod.decl_table.deinit(gpa);
 
-    for (self.failed_decls.items()) |entry| {
+    for (mod.failed_decls.items()) |entry| {
         entry.value.destroy(gpa);
     }
-    self.failed_decls.deinit(gpa);
+    mod.failed_decls.deinit(gpa);
 
-    for (self.emit_h_failed_decls.items()) |entry| {
+    for (mod.emit_h_failed_decls.items()) |entry| {
         entry.value.destroy(gpa);
     }
-    self.emit_h_failed_decls.deinit(gpa);
+    mod.emit_h_failed_decls.deinit(gpa);
 
-    for (self.failed_files.items()) |entry| {
+    for (mod.failed_files.items()) |entry| {
         entry.value.destroy(gpa);
     }
-    self.failed_files.deinit(gpa);
+    mod.failed_files.deinit(gpa);
 
-    for (self.failed_exports.items()) |entry| {
+    for (mod.failed_exports.items()) |entry| {
         entry.value.destroy(gpa);
     }
-    self.failed_exports.deinit(gpa);
+    mod.failed_exports.deinit(gpa);
 
-    self.compile_log_decls.deinit(gpa);
+    mod.compile_log_decls.deinit(gpa);
 
-    for (self.decl_exports.items()) |entry| {
+    for (mod.decl_exports.items()) |entry| {
         const export_list = entry.value;
         gpa.free(export_list);
     }
-    self.decl_exports.deinit(gpa);
+    mod.decl_exports.deinit(gpa);
 
-    for (self.export_owners.items()) |entry| {
+    for (mod.export_owners.items()) |entry| {
         freeExportList(gpa, entry.value);
     }
-    self.export_owners.deinit(gpa);
+    mod.export_owners.deinit(gpa);
 
-    self.symbol_exports.deinit(gpa);
-    self.root_scope.destroy(gpa);
+    mod.symbol_exports.deinit(gpa);
+    mod.root_scope.destroy(gpa);
 
-    var it = self.global_error_set.iterator();
+    var it = mod.global_error_set.iterator();
     while (it.next()) |entry| {
         gpa.free(entry.key);
     }
-    self.global_error_set.deinit(gpa);
+    mod.global_error_set.deinit(gpa);
 
-    for (self.import_table.items()) |entry| {
+    for (mod.import_table.items()) |entry| {
         entry.value.destroy(gpa);
     }
-    self.import_table.deinit(gpa);
+    mod.import_table.deinit(gpa);
 }
 
 fn freeExportList(gpa: *Allocator, export_list: []*Export) void {
@@ -1102,28 +1510,37 @@ fn astgenAndSemaDecl(mod: *Module, decl: *Decl) !bool {
             // A comptime decl does not store any value so we can just deinit this arena after analysis is done.
             var analysis_arena = std.heap.ArenaAllocator.init(mod.gpa);
             defer analysis_arena.deinit();
-            var gen_scope: Scope.GenZIR = .{
-                .decl = decl,
-                .arena = &analysis_arena.allocator,
-                .parent = &decl.container.base,
-                .force_comptime = true,
-            };
-            defer gen_scope.instructions.deinit(mod.gpa);
 
-            const block_expr = node_datas[decl_node].lhs;
-            _ = try astgen.comptimeExpr(mod, &gen_scope.base, .none, block_expr);
-            if (std.builtin.mode == .Debug and mod.comp.verbose_ir) {
-                zir.dumpZir(mod.gpa, "comptime_block", decl.name, gen_scope.instructions.items) catch {};
-            }
+            const code: zir.Code = blk: {
+                var wip_zir_code: WipZirCode = .{
+                    .decl = decl,
+                    .arena = &analysis_arena.allocator,
+                    .gpa = mod.gpa,
+                };
+                defer wip_zir_code.deinit();
+                var gen_scope: Scope.GenZir = .{
+                    .force_comptime = true,
+                    .parent = &decl.container.base,
+                    .zir_code = &wip_zir_code,
+                };
 
-            var inst_table = Scope.Block.InstTable.init(mod.gpa);
-            defer inst_table.deinit();
+                const block_expr = node_datas[decl_node].lhs;
+                _ = try astgen.comptimeExpr(mod, &gen_scope.base, .none, block_expr);
+                if (std.builtin.mode == .Debug and mod.comp.verbose_ir) {
+                    zir.dumpZir(mod.gpa, "comptime_block", decl.name, gen_scope.instructions.items) catch {};
+                }
+                break :blk wip_zir_code.finish();
+            };
 
-            var branch_quota: u32 = default_eval_branch_quota;
+            var sema: Sema = .{
+                .mod = mod,
+                .code = code,
+                .inst_map = try mod.gpa.alloc(*ir.Inst, code.instructions.len),
+            };
+            defer mod.gpa.free(sema.inst_map);
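The `inst_map` allocated above replaces the old hash-map `InstTable`: because ZIR instructions are now densely numbered, the ZIR-to-analyzed-instruction mapping can be a flat array pre-sized to `code.instructions.len`. A minimal sketch of that idea (hypothetical names, not the compiler's actual API):

```python
# Sketch only: with densely numbered ZIR instructions, the mapping from a
# ZIR index to its analyzed instruction is a flat pre-sized array rather
# than the hash map (InstTable) used before this commit.
class Sema:
    def __init__(self, code_len):
        # One slot per ZIR instruction index; filled in as analysis proceeds.
        self.inst_map = [None] * code_len

    def map(self, zir_index, analyzed_inst):
        self.inst_map[zir_index] = analyzed_inst

    def resolve(self, zir_index):
        inst = self.inst_map[zir_index]
        assert inst is not None, "ZIR instruction analyzed out of order"
        return inst

sema = Sema(code_len=4)
sema.map(2, "analyzed:add")
print(sema.resolve(2))  # prints "analyzed:add"
```

Lookup becomes a bounds-checked array index instead of a hash probe, which is the point of the dense numbering.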
 
             var block_scope: Scope.Block = .{
                 .parent = null,
-                .inst_table = &inst_table,
+                .sema = &sema,
                 .func = null,
                 .owner_decl = decl,
                 .src_decl = decl,
@@ -1131,13 +1548,10 @@ fn astgenAndSemaDecl(mod: *Module, decl: *Decl) !bool {
                 .arena = &analysis_arena.allocator,
                 .inlining = null,
                 .is_comptime = true,
-                .branch_quota = &branch_quota,
             };
             defer block_scope.instructions.deinit(mod.gpa);
 
-            _ = try zir_sema.analyzeBody(mod, &block_scope, .{
-                .instructions = gen_scope.instructions.items,
-            });
+            try sema.root(&block_scope);
 
             decl.analysis = .complete;
             decl.generation = mod.generation;
@@ -1160,7 +1574,6 @@ fn astgenAndSemaFn(
 
     decl.analysis = .in_progress;
 
-    const token_starts = tree.tokens.items(.start);
     const token_tags = tree.tokens.items(.tag);
 
     // This arena allocator's memory is discarded at the end of this function. It is used
@@ -1168,13 +1581,18 @@ fn astgenAndSemaFn(
     // to complete the Decl analysis.
     var fn_type_scope_arena = std.heap.ArenaAllocator.init(mod.gpa);
     defer fn_type_scope_arena.deinit();
-    var fn_type_scope: Scope.GenZIR = .{
+
+    var fn_type_wip_zir_exec: WipZirCode = .{
         .decl = decl,
         .arena = &fn_type_scope_arena.allocator,
-        .parent = &decl.container.base,
+        .gpa = mod.gpa,
+    };
+    defer fn_type_wip_zir_exec.deinit();
+    var fn_type_scope: Scope.GenZir = .{
         .force_comptime = true,
+        .parent = &decl.container.base,
+        .zir_code = &fn_type_wip_zir_exec,
     };
-    defer fn_type_scope.instructions.deinit(mod.gpa);
 
     decl.is_pub = fn_proto.visib_token != null;
 
@@ -1189,13 +1607,8 @@ fn astgenAndSemaFn(
         }
         break :blk count;
     };
-    const param_types = try fn_type_scope.arena.alloc(*zir.Inst, param_count);
-    const fn_src = token_starts[fn_proto.ast.fn_token];
-    const type_type = try astgen.addZIRInstConst(mod, &fn_type_scope.base, fn_src, .{
-        .ty = Type.initTag(.type),
-        .val = Value.initTag(.type_type),
-    });
-    const type_type_rl: astgen.ResultLoc = .{ .ty = type_type };
+    const param_types = try fn_type_scope_arena.allocator.alloc(zir.Inst.Index, param_count);
+    const type_type_rl: astgen.ResultLoc = .{ .ty = @enumToInt(zir.Const.type_type) };
 
     var is_var_args = false;
     {
@@ -1301,39 +1714,31 @@ fn astgenAndSemaFn(
     else
         false;
 
-    const cc_inst = if (fn_proto.ast.callconv_expr != 0) cc: {
+    const cc: zir.Inst.Index = if (fn_proto.ast.callconv_expr != 0)
         // TODO instead of enum literal type, this needs to be the
         // std.builtin.CallingConvention enum. We need to implement importing other files
         // and enums in order to fix this.
-        const src = token_starts[tree.firstToken(fn_proto.ast.callconv_expr)];
-        const enum_lit_ty = try astgen.addZIRInstConst(mod, &fn_type_scope.base, src, .{
-            .ty = Type.initTag(.type),
-            .val = Value.initTag(.enum_literal_type),
-        });
-        break :cc try astgen.comptimeExpr(mod, &fn_type_scope.base, .{
-            .ty = enum_lit_ty,
-        }, fn_proto.ast.callconv_expr);
-    } else if (is_extern) cc: {
-        // note: https://github.com/ziglang/zig/issues/5269
-        const src = token_starts[fn_proto.extern_export_token.?];
-        break :cc try astgen.addZIRInst(mod, &fn_type_scope.base, src, zir.Inst.EnumLiteral, .{ .name = "C" }, .{});
-    } else null;
-
-    const fn_type_inst = if (cc_inst) |cc| fn_type: {
-        var fn_type = try astgen.addZirInstTag(mod, &fn_type_scope.base, fn_src, .fn_type_cc, .{
-            .return_type = return_type_inst,
+        try astgen.comptimeExpr(mod, &fn_type_scope.base, .{
+            .ty = @enumToInt(zir.Const.enum_literal_type),
+        }, fn_proto.ast.callconv_expr)
+    else if (is_extern) // note: https://github.com/ziglang/zig/issues/5269
+        try fn_type_scope.addStrBytes(.enum_literal, "C")
+    else
+        0;
+
+    const fn_type_inst: zir.Inst.Index = if (cc != 0) fn_type: {
+        const tag: zir.Inst.Tag = if (is_var_args) .fn_type_cc_var_args else .fn_type_cc;
+        break :fn_type try fn_type_scope.addFnTypeCc(tag, .{
+            .ret_ty = return_type_inst,
             .param_types = param_types,
             .cc = cc,
         });
-        if (is_var_args) fn_type.tag = .fn_type_cc_var_args;
-        break :fn_type fn_type;
     } else fn_type: {
-        var fn_type = try astgen.addZirInstTag(mod, &fn_type_scope.base, fn_src, .fn_type, .{
-            .return_type = return_type_inst,
+        const tag: zir.Inst.Tag = if (is_var_args) .fn_type_var_args else .fn_type;
+        break :fn_type try fn_type_scope.addFnType(tag, .{
+            .ret_ty = return_type_inst,
             .param_types = param_types,
         });
-        if (is_var_args) fn_type.tag = .fn_type_var_args;
-        break :fn_type fn_type;
     };
 
     if (std.builtin.mode == .Debug and mod.comp.verbose_ir) {
@@ -1345,14 +1750,17 @@ fn astgenAndSemaFn(
     errdefer decl_arena.deinit();
     const decl_arena_state = try decl_arena.allocator.create(std.heap.ArenaAllocator.State);
 
-    var inst_table = Scope.Block.InstTable.init(mod.gpa);
-    defer inst_table.deinit();
-
-    var branch_quota: u32 = default_eval_branch_quota;
+    const fn_type_code = fn_type_wip_zir_exec.finish();
+    var fn_type_sema: Sema = .{
+        .mod = mod,
+        .code = fn_type_code,
+        .inst_map = try mod.gpa.alloc(*ir.Inst, fn_type_code.instructions.len),
+    };
+    defer mod.gpa.free(fn_type_sema.inst_map);
 
     var block_scope: Scope.Block = .{
         .parent = null,
-        .inst_table = &inst_table,
+        .sema = &fn_type_sema,
         .func = null,
         .owner_decl = decl,
         .src_decl = decl,
@@ -1360,14 +1768,10 @@ fn astgenAndSemaFn(
         .arena = &decl_arena.allocator,
         .inlining = null,
         .is_comptime = false,
-        .branch_quota = &branch_quota,
     };
     defer block_scope.instructions.deinit(mod.gpa);
 
-    const fn_type = try zir_sema.analyzeBodyValueAsType(mod, &block_scope, fn_type_inst, .{
-        .instructions = fn_type_scope.instructions.items,
-    });
-
+    const fn_type = try fn_type_sema.rootAsType(&block_scope, fn_type_inst);
     if (body_node == 0) {
         if (!is_extern) {
             return mod.failNode(&block_scope.base, fn_proto.ast.fn_token, "non-extern function has no body", .{});
@@ -1411,43 +1815,47 @@ fn astgenAndSemaFn(
 
     const fn_zir: zir.Body = blk: {
         // We put the ZIR inside the Decl arena.
-        var gen_scope: Scope.GenZIR = .{
+        var wip_zir_code: WipZirCode = .{
             .decl = decl,
             .arena = &decl_arena.allocator,
-            .parent = &decl.container.base,
+            .gpa = mod.gpa,
+            .arg_count = param_count,
+        };
+        defer wip_zir_code.deinit();
+
+        var gen_scope: Scope.GenZir = .{
             .force_comptime = false,
+            .parent = &decl.container.base,
+            .zir_code = &wip_zir_code,
         };
-        defer gen_scope.instructions.deinit(mod.gpa);
+        // Iterate over the parameters. We store an index into `string_bytes` for
+        // each param name as the first N items of `extra`, so that debug info can
+        // later refer to the parameter names even while the respective source code
+        // is unloaded.
+        try wip_zir_code.extra.ensureCapacity(mod.gpa, param_count);
 
-        // We need an instruction for each parameter, and they must be first in the body.
-        try gen_scope.instructions.resize(mod.gpa, param_count);
         var params_scope = &gen_scope.base;
         var i: usize = 0;
         var it = fn_proto.iterate(tree);
         while (it.next()) |param| : (i += 1) {
             const name_token = param.name_token.?;
-            const src = token_starts[name_token];
             const param_name = try mod.identifierTokenString(&gen_scope.base, name_token);
-            const arg = try decl_arena.allocator.create(zir.Inst.Arg);
-            arg.* = .{
-                .base = .{
-                    .tag = .arg,
-                    .src = src,
-                },
-                .positionals = .{
-                    .name = param_name,
-                },
-                .kw_args = .{},
-            };
-            gen_scope.instructions.items[i] = &arg.base;
             const sub_scope = try decl_arena.allocator.create(Scope.LocalVal);
             sub_scope.* = .{
                 .parent = params_scope,
                 .gen_zir = &gen_scope,
                 .name = param_name,
-                .inst = &arg.base,
+                // The instruction index space puts the implicit const list first,
+                // then the implicit arg list.
+                .inst = zir.const_inst_list.len + i,
             };
             params_scope = &sub_scope.base;
+
+            // Additionally put the param name into `string_bytes` and reference it with
+            // `extra` so that we have access to the data in codegen, for debug info.
+            const str_index = @intCast(u32, wip_zir_code.string_bytes.items.len);
+            wip_zir_code.extra.appendAssumeCapacity(str_index);
+            try wip_zir_code.string_bytes.ensureCapacity(mod.gpa, wip_zir_code.string_bytes.items.len + param_name.len + 1);
+            wip_zir_code.string_bytes.appendSliceAssumeCapacity(param_name);
+            wip_zir_code.string_bytes.appendAssumeCapacity(0);
         }
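The loop above implements a simple string-interning scheme: each param name is appended to a shared byte buffer with a 0 terminator, and only a 4-byte start index is recorded in `extra`. A sketch of the round trip (names are illustrative, not the compiler's actual API):

```python
# Sketch of the param-name storage scheme: null-terminated names in a shared
# byte buffer, with 4-byte start indices recorded in `extra`.
string_bytes = bytearray()
extra = []

def intern_param_name(name: str) -> int:
    start = len(string_bytes)
    extra.append(start)
    string_bytes.extend(name.encode())
    string_bytes.append(0)  # null terminator, as in the Zig code above
    return start

def null_terminated_string(start: int) -> str:
    # Read back by scanning for the terminator, as codegen later does.
    end = string_bytes.index(0, start)
    return string_bytes[start:end].decode()

for name in ["argc", "argv"]:
    intern_param_name(name)
print([null_terminated_string(i) for i in extra])  # prints ['argc', 'argv']
```

This is why a 4-byte index suffices to reference a string of any length, which is what makes the compact 8-byte-per-instruction layout workable.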
 
         _ = try astgen.expr(mod, params_scope, .none, body_node);
@@ -1455,8 +1863,7 @@ fn astgenAndSemaFn(
         if (gen_scope.instructions.items.len == 0 or
             !gen_scope.instructions.items[gen_scope.instructions.items.len - 1].tag.isNoReturn())
         {
-            const src = token_starts[tree.lastToken(body_node)];
-            _ = try astgen.addZIRNoOp(mod, &gen_scope.base, src, .return_void);
+            _ = try gen_scope.addRetTok(@enumToInt(zir.Const.void_value), tree.lastToken(body_node));
         }
 
         if (std.builtin.mode == .Debug and mod.comp.verbose_ir) {
@@ -1626,7 +2033,7 @@ fn astgenAndSemaVarDecl(
     const var_info: struct { ty: Type, val: ?Value } = if (var_decl.ast.init_node != 0) vi: {
         var gen_scope_arena = std.heap.ArenaAllocator.init(mod.gpa);
         defer gen_scope_arena.deinit();
-        var gen_scope: Scope.GenZIR = .{
+        var gen_scope: Scope.GenZir = .{
             .decl = decl,
             .arena = &gen_scope_arena.allocator,
             .parent = &decl.container.base,
@@ -1698,7 +2105,7 @@ fn astgenAndSemaVarDecl(
         // Temporary arena for the zir instructions.
         var type_scope_arena = std.heap.ArenaAllocator.init(mod.gpa);
         defer type_scope_arena.deinit();
-        var type_scope: Scope.GenZIR = .{
+        var type_scope: Scope.GenZir = .{
             .decl = decl,
             .arena = &type_scope_arena.allocator,
             .parent = &decl.container.base,
@@ -1778,47 +2185,47 @@ fn astgenAndSemaVarDecl(
     return type_changed;
 }
 
-fn declareDeclDependency(self: *Module, depender: *Decl, dependee: *Decl) !void {
-    try depender.dependencies.ensureCapacity(self.gpa, depender.dependencies.items().len + 1);
-    try dependee.dependants.ensureCapacity(self.gpa, dependee.dependants.items().len + 1);
+fn declareDeclDependency(mod: *Module, depender: *Decl, dependee: *Decl) !void {
+    try depender.dependencies.ensureCapacity(mod.gpa, depender.dependencies.items().len + 1);
+    try dependee.dependants.ensureCapacity(mod.gpa, dependee.dependants.items().len + 1);
 
     depender.dependencies.putAssumeCapacity(dependee, {});
     dependee.dependants.putAssumeCapacity(depender, {});
 }
 
-pub fn getAstTree(self: *Module, root_scope: *Scope.File) !*const ast.Tree {
+pub fn getAstTree(mod: *Module, root_scope: *Scope.File) !*const ast.Tree {
     const tracy = trace(@src());
     defer tracy.end();
 
     switch (root_scope.status) {
         .never_loaded, .unloaded_success => {
-            try self.failed_files.ensureCapacity(self.gpa, self.failed_files.items().len + 1);
+            try mod.failed_files.ensureCapacity(mod.gpa, mod.failed_files.items().len + 1);
 
-            const source = try root_scope.getSource(self);
+            const source = try root_scope.getSource(mod);
 
             var keep_tree = false;
-            root_scope.tree = try std.zig.parse(self.gpa, source);
-            defer if (!keep_tree) root_scope.tree.deinit(self.gpa);
+            root_scope.tree = try std.zig.parse(mod.gpa, source);
+            defer if (!keep_tree) root_scope.tree.deinit(mod.gpa);
 
             const tree = &root_scope.tree;
 
             if (tree.errors.len != 0) {
                 const parse_err = tree.errors[0];
 
-                var msg = std.ArrayList(u8).init(self.gpa);
+                var msg = std.ArrayList(u8).init(mod.gpa);
                 defer msg.deinit();
 
                 try tree.renderError(parse_err, msg.writer());
-                const err_msg = try self.gpa.create(ErrorMsg);
+                const err_msg = try mod.gpa.create(ErrorMsg);
                 err_msg.* = .{
                     .src_loc = .{
-                        .file_scope = root_scope,
-                        .byte_offset = tree.tokens.items(.start)[parse_err.token],
+                        .container = .{ .file_scope = root_scope },
+                        .lazy = .{ .token_abs = parse_err.token },
                     },
                     .msg = msg.toOwnedSlice(),
                 };
 
-                self.failed_files.putAssumeCapacityNoClobber(&root_scope.base, err_msg);
+                mod.failed_files.putAssumeCapacityNoClobber(&root_scope.base, err_msg);
                 root_scope.status = .unloaded_parse_failure;
                 return error.AnalysisFail;
             }
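The error message above now records `.lazy = .{ .token_abs = parse_err.token }` instead of an eager byte offset: the token index is stored, and the byte offset is computed only if and when the message is rendered. A minimal sketch of that lazy-resolution idea (hypothetical names, not the compiler's actual API):

```python
# Sketch only: a lazy source location stores a token index and defers the
# token->byte-offset lookup until the error is actually reported.
token_starts = [0, 4, 10, 17]  # byte offset of each token (illustrative)

class LazySrcLoc:
    def __init__(self, token_abs):
        self.token_abs = token_abs

    def byte_offset(self):
        # Resolution happens here, on demand, not at error-creation time.
        return token_starts[self.token_abs]

loc = LazySrcLoc(token_abs=2)
print(loc.byte_offset())  # prints 10
```

The payoff is that the common path (no error is ever rendered) never touches the token-offset table.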
@@ -2051,11 +2458,9 @@ fn semaContainerFn(
     const tracy = trace(@src());
     defer tracy.end();
 
-    const token_starts = tree.tokens.items(.start);
-    const token_tags = tree.tokens.items(.tag);
-
     // We will create a Decl for it regardless of analysis status.
     const name_tok = fn_proto.name_token orelse {
+        // This problem will go away with #1717.
         @panic("TODO missing function name");
     };
     const name = tree.tokenSlice(name_tok); // TODO use identifierTokenString
@@ -2068,8 +2473,8 @@ fn semaContainerFn(
         if (deleted_decls.swapRemove(decl) == null) {
             decl.analysis = .sema_failure;
             const msg = try ErrorMsg.create(mod.gpa, .{
-                .file_scope = container_scope.file_scope,
-                .byte_offset = token_starts[name_tok],
+                .container = .{ .file_scope = container_scope.file_scope },
+                .lazy = .{ .token_abs = name_tok },
             }, "redefinition of '{s}'", .{decl.name});
             errdefer msg.destroy(mod.gpa);
             try mod.failed_decls.putNoClobber(mod.gpa, decl, msg);
@@ -2098,6 +2503,7 @@ fn semaContainerFn(
         const new_decl = try mod.createNewDecl(&container_scope.base, name, decl_i, name_hash, contents_hash);
         container_scope.decls.putAssumeCapacity(new_decl, {});
         if (fn_proto.extern_export_token) |maybe_export_token| {
+            const token_tags = tree.tokens.items(.tag);
             if (token_tags[maybe_export_token] == .keyword_export) {
                 mod.comp.work_queue.writeItemAssumeCapacity(.{ .analyze_decl = new_decl });
             }
@@ -2117,11 +2523,7 @@ fn semaContainerVar(
     const tracy = trace(@src());
     defer tracy.end();
 
-    const token_starts = tree.tokens.items(.start);
-    const token_tags = tree.tokens.items(.tag);
-
     const name_token = var_decl.ast.mut_token + 1;
-    const name_src = token_starts[name_token];
     const name = tree.tokenSlice(name_token); // TODO identifierTokenString
     const name_hash = container_scope.fullyQualifiedNameHash(name);
     const contents_hash = std.zig.hashSrc(tree.getNodeSource(decl_node));
@@ -2132,8 +2534,8 @@ fn semaContainerVar(
         if (deleted_decls.swapRemove(decl) == null) {
             decl.analysis = .sema_failure;
             const err_msg = try ErrorMsg.create(mod.gpa, .{
-                .file_scope = container_scope.file_scope,
-                .byte_offset = name_src,
+                .container = .{ .file_scope = container_scope.file_scope },
+                .lazy = .{ .token_abs = name_token },
             }, "redefinition of '{s}'", .{decl.name});
             errdefer err_msg.destroy(mod.gpa);
             try mod.failed_decls.putNoClobber(mod.gpa, decl, err_msg);
@@ -2145,6 +2547,7 @@ fn semaContainerVar(
         const new_decl = try mod.createNewDecl(&container_scope.base, name, decl_i, name_hash, contents_hash);
         container_scope.decls.putAssumeCapacity(new_decl, {});
         if (var_decl.extern_export_token) |maybe_export_token| {
+            const token_tags = tree.tokens.items(.tag);
             if (token_tags[maybe_export_token] == .keyword_export) {
                 mod.comp.work_queue.writeItemAssumeCapacity(.{ .analyze_decl = new_decl });
             }
@@ -2167,11 +2570,11 @@ fn semaContainerField(
     log.err("TODO: analyze container field", .{});
 }
 
-pub fn deleteDecl(self: *Module, decl: *Decl) !void {
+pub fn deleteDecl(mod: *Module, decl: *Decl) !void {
     const tracy = trace(@src());
     defer tracy.end();
 
-    try self.deletion_set.ensureCapacity(self.gpa, self.deletion_set.items.len + decl.dependencies.items().len);
+    try mod.deletion_set.ensureCapacity(mod.gpa, mod.deletion_set.items.len + decl.dependencies.items().len);
 
     // Remove from the namespace it resides in. In the case of an anonymous Decl it will
     // not be present in the set, and this does nothing.
@@ -2179,7 +2582,7 @@ pub fn deleteDecl(self: *Module, decl: *Decl) !void {
 
     log.debug("deleting decl '{s}'", .{decl.name});
     const name_hash = decl.fullyQualifiedNameHash();
-    self.decl_table.removeAssertDiscard(name_hash);
+    mod.decl_table.removeAssertDiscard(name_hash);
     // Remove itself from its dependencies, because we are about to destroy the decl pointer.
     for (decl.dependencies.items()) |entry| {
         const dep = entry.key;
@@ -2188,7 +2591,7 @@ pub fn deleteDecl(self: *Module, decl: *Decl) !void {
             // We don't recursively perform a deletion here, because during the update,
             // another reference to it may turn up.
             dep.deletion_flag = true;
-            self.deletion_set.appendAssumeCapacity(dep);
+            mod.deletion_set.appendAssumeCapacity(dep);
         }
     }
     // Anything that depends on this deleted decl certainly needs to be re-analyzed.
@@ -2197,29 +2600,29 @@ pub fn deleteDecl(self: *Module, decl: *Decl) !void {
         dep.removeDependency(decl);
         if (dep.analysis != .outdated) {
             // TODO Move this failure possibility to the top of the function.
-            try self.markOutdatedDecl(dep);
+            try mod.markOutdatedDecl(dep);
         }
     }
-    if (self.failed_decls.swapRemove(decl)) |entry| {
-        entry.value.destroy(self.gpa);
+    if (mod.failed_decls.swapRemove(decl)) |entry| {
+        entry.value.destroy(mod.gpa);
     }
-    if (self.emit_h_failed_decls.swapRemove(decl)) |entry| {
-        entry.value.destroy(self.gpa);
+    if (mod.emit_h_failed_decls.swapRemove(decl)) |entry| {
+        entry.value.destroy(mod.gpa);
     }
-    _ = self.compile_log_decls.swapRemove(decl);
-    self.deleteDeclExports(decl);
-    self.comp.bin_file.freeDecl(decl);
+    _ = mod.compile_log_decls.swapRemove(decl);
+    mod.deleteDeclExports(decl);
+    mod.comp.bin_file.freeDecl(decl);
 
-    decl.destroy(self);
+    decl.destroy(mod);
 }
 
 /// Delete all the Export objects that are caused by this Decl. Re-analysis of
 /// this Decl will cause them to be re-created (or not).
-fn deleteDeclExports(self: *Module, decl: *Decl) void {
-    const kv = self.export_owners.swapRemove(decl) orelse return;
+fn deleteDeclExports(mod: *Module, decl: *Decl) void {
+    const kv = mod.export_owners.swapRemove(decl) orelse return;
 
     for (kv.value) |exp| {
-        if (self.decl_exports.getEntry(exp.exported_decl)) |decl_exports_kv| {
+        if (mod.decl_exports.getEntry(exp.exported_decl)) |decl_exports_kv| {
             // Remove exports with owner_decl matching the regenerating decl.
             const list = decl_exports_kv.value;
             var i: usize = 0;
@@ -2232,73 +2635,100 @@ fn deleteDeclExports(self: *Module, decl: *Decl) void {
                     i += 1;
                 }
             }
-            decl_exports_kv.value = self.gpa.shrink(list, new_len);
+            decl_exports_kv.value = mod.gpa.shrink(list, new_len);
             if (new_len == 0) {
-                self.decl_exports.removeAssertDiscard(exp.exported_decl);
+                mod.decl_exports.removeAssertDiscard(exp.exported_decl);
             }
         }
-        if (self.comp.bin_file.cast(link.File.Elf)) |elf| {
+        if (mod.comp.bin_file.cast(link.File.Elf)) |elf| {
             elf.deleteExport(exp.link.elf);
         }
-        if (self.comp.bin_file.cast(link.File.MachO)) |macho| {
+        if (mod.comp.bin_file.cast(link.File.MachO)) |macho| {
             macho.deleteExport(exp.link.macho);
         }
-        if (self.failed_exports.swapRemove(exp)) |entry| {
-            entry.value.destroy(self.gpa);
+        if (mod.failed_exports.swapRemove(exp)) |entry| {
+            entry.value.destroy(mod.gpa);
         }
-        _ = self.symbol_exports.swapRemove(exp.options.name);
-        self.gpa.free(exp.options.name);
-        self.gpa.destroy(exp);
+        _ = mod.symbol_exports.swapRemove(exp.options.name);
+        mod.gpa.free(exp.options.name);
+        mod.gpa.destroy(exp);
     }
-    self.gpa.free(kv.value);
+    mod.gpa.free(kv.value);
 }
 
-pub fn analyzeFnBody(self: *Module, decl: *Decl, func: *Fn) !void {
+pub fn analyzeFnBody(mod: *Module, decl: *Decl, func: *Fn) !void {
     const tracy = trace(@src());
     defer tracy.end();
 
     // Use the Decl's arena for function memory.
-    var arena = decl.typed_value.most_recent.arena.?.promote(self.gpa);
+    var arena = decl.typed_value.most_recent.arena.?.promote(mod.gpa);
     defer decl.typed_value.most_recent.arena.?.* = arena.state;
-    var inst_table = Scope.Block.InstTable.init(self.gpa);
-    defer inst_table.deinit();
-    var branch_quota: u32 = default_eval_branch_quota;
+
+    const inst_map = try mod.gpa.alloc(*ir.Inst, func.zir.instructions.len);
+    defer mod.gpa.free(inst_map);
+
+    const fn_ty = decl.typed_value.most_recent.typed_value.ty;
+    const param_inst_list = try mod.gpa.alloc(*ir.Inst, fn_ty.fnParamLen());
+    defer mod.gpa.free(param_inst_list);
+
+    for (param_inst_list) |*param_inst, param_index| {
+        const param_type = fn_ty.fnParamType(param_index);
+        const name = func.zir.nullTerminatedString(func.zir.extra[param_index]);
+        const arg_inst = try arena.allocator.create(ir.Inst.Arg);
+        arg_inst.* = .{
+            .base = .{
+                .tag = .arg,
+                .ty = param_type,
+                .src = .unneeded,
+            },
+            .name = name,
+        };
+        param_inst.* = &arg_inst.base;
+    }
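The loop above, together with the `.inst = zir.const_inst_list.len + i` offset used when naming parameters during astgen, implies a partitioned instruction index space: the lowest indices refer to a shared table of well-known constants, the next `arg_count` indices to function parameters, and the remainder to body instructions. A sketch of resolving such an index (the constant names and partition order here are assumptions for illustration, not the compiler's actual layout):

```python
# Sketch only: resolve a ZIR instruction index against a partitioned index
# space of [well-known consts | function args | body instructions].
CONST_LIST = ["void_value", "type_type", "enum_literal_type"]  # illustrative

def resolve(index, param_insts, inst_map):
    if index < len(CONST_LIST):
        return ("const", CONST_LIST[index])
    index -= len(CONST_LIST)
    if index < len(param_insts):
        return ("arg", param_insts[index])
    return ("body", inst_map[index - len(param_insts)])

params = ["argc", "argv"]
body = ["add", "ret"]
print(resolve(1, params, body))                    # a well-known constant
print(resolve(len(CONST_LIST) + 1, params, body))  # the second parameter
```

Reserving low indices for common constants means frequently used values like `void_value` cost no instruction at all, only a small fixed index.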
+
+    var sema: Sema = .{
+        .mod = mod,
+        .gpa = mod.gpa,
+        .arena = &arena.allocator,
+        .code = func.zir,
+        .inst_map = inst_map,
+        .owner_decl = decl,
+        .func = func,
+        .param_inst_list = param_inst_list,
+    };
 
     var inner_block: Scope.Block = .{
         .parent = null,
-        .inst_table = &inst_table,
-        .func = func,
-        .owner_decl = decl,
+        .sema = &sema,
         .src_decl = decl,
         .instructions = .{},
         .arena = &arena.allocator,
         .inlining = null,
         .is_comptime = false,
-        .branch_quota = &branch_quota,
     };
-    defer inner_block.instructions.deinit(self.gpa);
+    defer inner_block.instructions.deinit(mod.gpa);
 
     func.state = .in_progress;
     log.debug("set {s} to in_progress", .{decl.name});
 
-    try zir_sema.analyzeBody(self, &inner_block, func.zir);
+    try sema.root(&inner_block);
 
-    const instructions = try arena.allocator.dupe(*Inst, inner_block.instructions.items);
+    const instructions = try arena.allocator.dupe(*ir.Inst, inner_block.instructions.items);
     func.state = .success;
     func.body = .{ .instructions = instructions };
     log.debug("set {s} to success", .{decl.name});
 }
 
-fn markOutdatedDecl(self: *Module, decl: *Decl) !void {
+fn markOutdatedDecl(mod: *Module, decl: *Decl) !void {
     log.debug("mark {s} outdated", .{decl.name});
-    try self.comp.work_queue.writeItem(.{ .analyze_decl = decl });
-    if (self.failed_decls.swapRemove(decl)) |entry| {
-        entry.value.destroy(self.gpa);
+    try mod.comp.work_queue.writeItem(.{ .analyze_decl = decl });
+    if (mod.failed_decls.swapRemove(decl)) |entry| {
+        entry.value.destroy(mod.gpa);
     }
-    if (self.emit_h_failed_decls.swapRemove(decl)) |entry| {
-        entry.value.destroy(self.gpa);
+    if (mod.emit_h_failed_decls.swapRemove(decl)) |entry| {
+        entry.value.destroy(mod.gpa);
     }
-    _ = self.compile_log_decls.swapRemove(decl);
+    _ = mod.compile_log_decls.swapRemove(decl);
     decl.analysis = .outdated;
 }
 
@@ -2349,65 +2779,37 @@ fn allocateNewDecl(
 }
 
 fn createNewDecl(
-    self: *Module,
+    mod: *Module,
     scope: *Scope,
     decl_name: []const u8,
     src_index: usize,
     name_hash: Scope.NameHash,
     contents_hash: std.zig.SrcHash,
 ) !*Decl {
-    try self.decl_table.ensureCapacity(self.gpa, self.decl_table.items().len + 1);
-    const new_decl = try self.allocateNewDecl(scope, src_index, contents_hash);
-    errdefer self.gpa.destroy(new_decl);
-    new_decl.name = try mem.dupeZ(self.gpa, u8, decl_name);
-    self.decl_table.putAssumeCapacityNoClobber(name_hash, new_decl);
+    try mod.decl_table.ensureCapacity(mod.gpa, mod.decl_table.items().len + 1);
+    const new_decl = try mod.allocateNewDecl(scope, src_index, contents_hash);
+    errdefer mod.gpa.destroy(new_decl);
+    new_decl.name = try mem.dupeZ(mod.gpa, u8, decl_name);
+    mod.decl_table.putAssumeCapacityNoClobber(name_hash, new_decl);
     return new_decl;
 }
 
 /// Get error value for error tag `name`.
-pub fn getErrorValue(self: *Module, name: []const u8) !std.StringHashMapUnmanaged(u16).Entry {
-    const gop = try self.global_error_set.getOrPut(self.gpa, name);
+pub fn getErrorValue(mod: *Module, name: []const u8) !std.StringHashMapUnmanaged(u16).Entry {
+    const gop = try mod.global_error_set.getOrPut(mod.gpa, name);
     if (gop.found_existing)
         return gop.entry.*;
-    errdefer self.global_error_set.removeAssertDiscard(name);
+    errdefer mod.global_error_set.removeAssertDiscard(name);
 
-    gop.entry.key = try self.gpa.dupe(u8, name);
-    gop.entry.value = @intCast(u16, self.global_error_set.count() - 1);
+    gop.entry.key = try mod.gpa.dupe(u8, name);
+    gop.entry.value = @intCast(u16, mod.global_error_set.count() - 1);
     return gop.entry.*;
 }
 
-pub fn requireFunctionBlock(self: *Module, scope: *Scope, src: usize) !*Scope.Block {
-    return scope.cast(Scope.Block) orelse
-        return self.fail(scope, src, "instruction illegal outside function body", .{});
-}
-
-pub fn requireRuntimeBlock(self: *Module, scope: *Scope, src: usize) !*Scope.Block {
-    const block = try self.requireFunctionBlock(scope, src);
-    if (block.is_comptime) {
-        return self.fail(scope, src, "unable to resolve comptime value", .{});
-    }
-    return block;
-}
-
-pub fn resolveConstValue(self: *Module, scope: *Scope, base: *Inst) !Value {
-    return (try self.resolveDefinedValue(scope, base)) orelse
-        return self.fail(scope, base.src, "unable to resolve comptime value", .{});
-}
-
-pub fn resolveDefinedValue(self: *Module, scope: *Scope, base: *Inst) !?Value {
-    if (base.value()) |val| {
-        if (val.isUndef()) {
-            return self.fail(scope, base.src, "use of undefined value here causes undefined behavior", .{});
-        }
-        return val;
-    }
-    return null;
-}
-
 pub fn analyzeExport(
     mod: *Module,
     scope: *Scope,
-    src: usize,
+    src: LazySrcLoc,
     borrowed_symbol_name: []const u8,
     exported_decl: *Decl,
 ) !void {
@@ -2496,178 +2898,11 @@ pub fn analyzeExport(
         },
     };
 }
-
-pub fn addNoOp(
-    self: *Module,
-    block: *Scope.Block,
-    src: usize,
-    ty: Type,
-    comptime tag: Inst.Tag,
-) !*Inst {
-    const inst = try block.arena.create(tag.Type());
-    inst.* = .{
-        .base = .{
-            .tag = tag,
-            .ty = ty,
-            .src = src,
-        },
-    };
-    try block.instructions.append(self.gpa, &inst.base);
-    return &inst.base;
-}
-
-pub fn addUnOp(
-    self: *Module,
-    block: *Scope.Block,
-    src: usize,
-    ty: Type,
-    tag: Inst.Tag,
-    operand: *Inst,
-) !*Inst {
-    const inst = try block.arena.create(Inst.UnOp);
-    inst.* = .{
-        .base = .{
-            .tag = tag,
-            .ty = ty,
-            .src = src,
-        },
-        .operand = operand,
-    };
-    try block.instructions.append(self.gpa, &inst.base);
-    return &inst.base;
-}
-
-pub fn addBinOp(
-    self: *Module,
-    block: *Scope.Block,
-    src: usize,
-    ty: Type,
-    tag: Inst.Tag,
-    lhs: *Inst,
-    rhs: *Inst,
-) !*Inst {
-    const inst = try block.arena.create(Inst.BinOp);
-    inst.* = .{
-        .base = .{
-            .tag = tag,
-            .ty = ty,
-            .src = src,
-        },
-        .lhs = lhs,
-        .rhs = rhs,
-    };
-    try block.instructions.append(self.gpa, &inst.base);
-    return &inst.base;
-}
-
-pub fn addArg(self: *Module, block: *Scope.Block, src: usize, ty: Type, name: [*:0]const u8) !*Inst {
-    const inst = try block.arena.create(Inst.Arg);
-    inst.* = .{
-        .base = .{
-            .tag = .arg,
-            .ty = ty,
-            .src = src,
-        },
-        .name = name,
-    };
-    try block.instructions.append(self.gpa, &inst.base);
-    return &inst.base;
-}
-
-pub fn addBr(
-    self: *Module,
-    scope_block: *Scope.Block,
-    src: usize,
-    target_block: *Inst.Block,
-    operand: *Inst,
-) !*Inst.Br {
-    const inst = try scope_block.arena.create(Inst.Br);
-    inst.* = .{
-        .base = .{
-            .tag = .br,
-            .ty = Type.initTag(.noreturn),
-            .src = src,
-        },
-        .operand = operand,
-        .block = target_block,
-    };
-    try scope_block.instructions.append(self.gpa, &inst.base);
-    return inst;
-}
-
-pub fn addCondBr(
-    self: *Module,
-    block: *Scope.Block,
-    src: usize,
-    condition: *Inst,
-    then_body: ir.Body,
-    else_body: ir.Body,
-) !*Inst {
-    const inst = try block.arena.create(Inst.CondBr);
-    inst.* = .{
-        .base = .{
-            .tag = .condbr,
-            .ty = Type.initTag(.noreturn),
-            .src = src,
-        },
-        .condition = condition,
-        .then_body = then_body,
-        .else_body = else_body,
-    };
-    try block.instructions.append(self.gpa, &inst.base);
-    return &inst.base;
-}
-
-pub fn addCall(
-    self: *Module,
-    block: *Scope.Block,
-    src: usize,
-    ty: Type,
-    func: *Inst,
-    args: []const *Inst,
-) !*Inst {
-    const inst = try block.arena.create(Inst.Call);
-    inst.* = .{
-        .base = .{
-            .tag = .call,
-            .ty = ty,
-            .src = src,
-        },
-        .func = func,
-        .args = args,
-    };
-    try block.instructions.append(self.gpa, &inst.base);
-    return &inst.base;
-}
-
-pub fn addSwitchBr(
-    self: *Module,
-    block: *Scope.Block,
-    src: usize,
-    target: *Inst,
-    cases: []Inst.SwitchBr.Case,
-    else_body: ir.Body,
-) !*Inst {
-    const inst = try block.arena.create(Inst.SwitchBr);
-    inst.* = .{
-        .base = .{
-            .tag = .switchbr,
-            .ty = Type.initTag(.noreturn),
-            .src = src,
-        },
-        .target = target,
-        .cases = cases,
-        .else_body = else_body,
-    };
-    try block.instructions.append(self.gpa, &inst.base);
-    return &inst.base;
-}
-
-pub fn constInst(self: *Module, scope: *Scope, src: usize, typed_value: TypedValue) !*Inst {
-    const const_inst = try scope.arena().create(Inst.Constant);
+pub fn constInst(mod: *Module, arena: *Allocator, src: LazySrcLoc, typed_value: TypedValue) !*ir.Inst {
+    const const_inst = try arena.create(ir.Inst.Constant);
     const_inst.* = .{
         .base = .{
-            .tag = Inst.Constant.base_tag,
+            .tag = ir.Inst.Constant.base_tag,
             .ty = typed_value.ty,
             .src = src,
         },
@@ -2676,94 +2911,94 @@ pub fn constInst(self: *Module, scope: *Scope, src: usize, typed_value: TypedVal
     return &const_inst.base;
 }
 
-pub fn constType(self: *Module, scope: *Scope, src: usize, ty: Type) !*Inst {
-    return self.constInst(scope, src, .{
+pub fn constType(mod: *Module, arena: *Allocator, src: LazySrcLoc, ty: Type) !*ir.Inst {
+    return mod.constInst(arena, src, .{
         .ty = Type.initTag(.type),
-        .val = try ty.toValue(scope.arena()),
+        .val = try ty.toValue(arena),
     });
 }
 
-pub fn constVoid(self: *Module, scope: *Scope, src: usize) !*Inst {
-    return self.constInst(scope, src, .{
+pub fn constVoid(mod: *Module, arena: *Allocator, src: LazySrcLoc) !*ir.Inst {
+    return mod.constInst(arena, src, .{
         .ty = Type.initTag(.void),
         .val = Value.initTag(.void_value),
     });
 }
 
-pub fn constNoReturn(self: *Module, scope: *Scope, src: usize) !*Inst {
-    return self.constInst(scope, src, .{
+pub fn constNoReturn(mod: *Module, arena: *Allocator, src: LazySrcLoc) !*ir.Inst {
+    return mod.constInst(arena, src, .{
         .ty = Type.initTag(.noreturn),
         .val = Value.initTag(.unreachable_value),
     });
 }
 
-pub fn constUndef(self: *Module, scope: *Scope, src: usize, ty: Type) !*Inst {
-    return self.constInst(scope, src, .{
+pub fn constUndef(mod: *Module, arena: *Allocator, src: LazySrcLoc, ty: Type) !*ir.Inst {
+    return mod.constInst(arena, src, .{
         .ty = ty,
         .val = Value.initTag(.undef),
     });
 }
 
-pub fn constBool(self: *Module, scope: *Scope, src: usize, v: bool) !*Inst {
-    return self.constInst(scope, src, .{
+pub fn constBool(mod: *Module, arena: *Allocator, src: LazySrcLoc, v: bool) !*ir.Inst {
+    return mod.constInst(arena, src, .{
         .ty = Type.initTag(.bool),
         .val = ([2]Value{ Value.initTag(.bool_false), Value.initTag(.bool_true) })[@boolToInt(v)],
     });
 }
 
-pub fn constIntUnsigned(self: *Module, scope: *Scope, src: usize, ty: Type, int: u64) !*Inst {
-    return self.constInst(scope, src, .{
+pub fn constIntUnsigned(mod: *Module, arena: *Allocator, src: LazySrcLoc, ty: Type, int: u64) !*ir.Inst {
+    return mod.constInst(arena, src, .{
         .ty = ty,
-        .val = try Value.Tag.int_u64.create(scope.arena(), int),
+        .val = try Value.Tag.int_u64.create(arena, int),
     });
 }
 
-pub fn constIntSigned(self: *Module, scope: *Scope, src: usize, ty: Type, int: i64) !*Inst {
-    return self.constInst(scope, src, .{
+pub fn constIntSigned(mod: *Module, arena: *Allocator, src: LazySrcLoc, ty: Type, int: i64) !*ir.Inst {
+    return mod.constInst(arena, src, .{
         .ty = ty,
-        .val = try Value.Tag.int_i64.create(scope.arena(), int),
+        .val = try Value.Tag.int_i64.create(arena, int),
     });
 }
 
-pub fn constIntBig(self: *Module, scope: *Scope, src: usize, ty: Type, big_int: BigIntConst) !*Inst {
+pub fn constIntBig(mod: *Module, arena: *Allocator, src: LazySrcLoc, ty: Type, big_int: BigIntConst) !*ir.Inst {
     if (big_int.positive) {
         if (big_int.to(u64)) |x| {
-            return self.constIntUnsigned(scope, src, ty, x);
+            return mod.constIntUnsigned(arena, src, ty, x);
         } else |err| switch (err) {
             error.NegativeIntoUnsigned => unreachable,
             error.TargetTooSmall => {}, // handled below
         }
-        return self.constInst(scope, src, .{
+        return mod.constInst(arena, src, .{
             .ty = ty,
-            .val = try Value.Tag.int_big_positive.create(scope.arena(), big_int.limbs),
+            .val = try Value.Tag.int_big_positive.create(arena, big_int.limbs),
         });
     } else {
         if (big_int.to(i64)) |x| {
-            return self.constIntSigned(scope, src, ty, x);
+            return mod.constIntSigned(arena, src, ty, x);
         } else |err| switch (err) {
             error.NegativeIntoUnsigned => unreachable,
             error.TargetTooSmall => {}, // handled below
         }
-        return self.constInst(scope, src, .{
+        return mod.constInst(arena, src, .{
             .ty = ty,
-            .val = try Value.Tag.int_big_negative.create(scope.arena(), big_int.limbs),
+            .val = try Value.Tag.int_big_negative.create(arena, big_int.limbs),
         });
     }
 }
 
 pub fn createAnonymousDecl(
-    self: *Module,
+    mod: *Module,
     scope: *Scope,
     decl_arena: *std.heap.ArenaAllocator,
     typed_value: TypedValue,
 ) !*Decl {
-    const name_index = self.getNextAnonNameIndex();
+    const name_index = mod.getNextAnonNameIndex();
     const scope_decl = scope.ownerDecl().?;
-    const name = try std.fmt.allocPrint(self.gpa, "{s}__anon_{d}", .{ scope_decl.name, name_index });
-    defer self.gpa.free(name);
+    const name = try std.fmt.allocPrint(mod.gpa, "{s}__anon_{d}", .{ scope_decl.name, name_index });
+    defer mod.gpa.free(name);
     const name_hash = scope.namespace().fullyQualifiedNameHash(name);
     const src_hash: std.zig.SrcHash = undefined;
-    const new_decl = try self.createNewDecl(scope, name, scope_decl.src_index, name_hash, src_hash);
+    const new_decl = try mod.createNewDecl(scope, name, scope_decl.src_index, name_hash, src_hash);
     const decl_arena_state = try decl_arena.allocator.create(std.heap.ArenaAllocator.State);
 
     decl_arena_state.* = decl_arena.state;
@@ -2774,32 +3009,32 @@ pub fn createAnonymousDecl(
         },
     };
     new_decl.analysis = .complete;
-    new_decl.generation = self.generation;
+    new_decl.generation = mod.generation;
 
    // TODO: This generates the Decl into the machine code file if it is of a type that is non-zero size.
    // We should be able to further improve the compiler to omit Decls which are only referenced at
    // compile-time and not runtime.
     if (typed_value.ty.hasCodeGenBits()) {
-        try self.comp.bin_file.allocateDeclIndexes(new_decl);
-        try self.comp.work_queue.writeItem(.{ .codegen_decl = new_decl });
+        try mod.comp.bin_file.allocateDeclIndexes(new_decl);
+        try mod.comp.work_queue.writeItem(.{ .codegen_decl = new_decl });
     }
 
     return new_decl;
 }
 
 pub fn createContainerDecl(
-    self: *Module,
+    mod: *Module,
     scope: *Scope,
     base_token: std.zig.ast.TokenIndex,
     decl_arena: *std.heap.ArenaAllocator,
     typed_value: TypedValue,
 ) !*Decl {
     const scope_decl = scope.ownerDecl().?;
-    const name = try self.getAnonTypeName(scope, base_token);
-    defer self.gpa.free(name);
+    const name = try mod.getAnonTypeName(scope, base_token);
+    defer mod.gpa.free(name);
     const name_hash = scope.namespace().fullyQualifiedNameHash(name);
     const src_hash: std.zig.SrcHash = undefined;
-    const new_decl = try self.createNewDecl(scope, name, scope_decl.src_index, name_hash, src_hash);
+    const new_decl = try mod.createNewDecl(scope, name, scope_decl.src_index, name_hash, src_hash);
     const decl_arena_state = try decl_arena.allocator.create(std.heap.ArenaAllocator.State);
 
     decl_arena_state.* = decl_arena.state;
@@ -2810,12 +3045,12 @@ pub fn createContainerDecl(
         },
     };
     new_decl.analysis = .complete;
-    new_decl.generation = self.generation;
+    new_decl.generation = mod.generation;
 
     return new_decl;
 }
 
-fn getAnonTypeName(self: *Module, scope: *Scope, base_token: std.zig.ast.TokenIndex) ![]u8 {
+fn getAnonTypeName(mod: *Module, scope: *Scope, base_token: std.zig.ast.TokenIndex) ![]u8 {
    // TODO add namespaces, generic function signatures
     const tree = scope.tree();
     const token_tags = tree.tokens.items(.tag);
@@ -2827,845 +3062,125 @@ fn getAnonTypeName(self: *Module, scope: *Scope, base_token: std.zig.ast.TokenIn
         else => unreachable,
     };
     const loc = tree.tokenLocation(0, base_token);
-    return std.fmt.allocPrint(self.gpa, "{s}:{d}:{d}", .{ base_name, loc.line, loc.column });
+    return std.fmt.allocPrint(mod.gpa, "{s}:{d}:{d}", .{ base_name, loc.line, loc.column });
 }
 
-fn getNextAnonNameIndex(self: *Module) usize {
-    return @atomicRmw(usize, &self.next_anon_name_index, .Add, 1, .Monotonic);
+fn getNextAnonNameIndex(mod: *Module) usize {
+    return @atomicRmw(usize, &mod.next_anon_name_index, .Add, 1, .Monotonic);
 }
 
-pub fn lookupDeclName(self: *Module, scope: *Scope, ident_name: []const u8) ?*Decl {
+pub fn lookupDeclName(mod: *Module, scope: *Scope, ident_name: []const u8) ?*Decl {
     const namespace = scope.namespace();
     const name_hash = namespace.fullyQualifiedNameHash(ident_name);
-    return self.decl_table.get(name_hash);
-}
-
-pub fn analyzeDeclVal(mod: *Module, scope: *Scope, src: usize, decl: *Decl) InnerError!*Inst {
-    const decl_ref = try mod.analyzeDeclRef(scope, src, decl);
-    return mod.analyzeDeref(scope, src, decl_ref, src);
+    return mod.decl_table.get(name_hash);
 }
 
-pub fn analyzeDeclRef(self: *Module, scope: *Scope, src: usize, decl: *Decl) InnerError!*Inst {
-    const scope_decl = scope.ownerDecl().?;
-    try self.declareDeclDependency(scope_decl, decl);
-    self.ensureDeclAnalyzed(decl) catch |err| {
-        if (scope.cast(Scope.Block)) |block| {
-            if (block.func) |func| {
-                func.state = .dependency_failure;
-            } else {
-                block.owner_decl.analysis = .dependency_failure;
-            }
-        } else {
-            scope_decl.analysis = .dependency_failure;
-        }
-        return err;
+fn makeIntType(mod: *Module, scope: *Scope, signed: bool, bits: u16) !Type {
+    const int_payload = try scope.arena().create(Type.Payload.Bits);
+    int_payload.* = .{
+        .base = .{
+            .tag = if (signed) .int_signed else .int_unsigned,
+        },
+        .data = bits,
     };
-
-    const decl_tv = try decl.typedValue();
-    if (decl_tv.val.tag() == .variable) {
-        return self.analyzeVarRef(scope, src, decl_tv);
-    }
-    return self.constInst(scope, src, .{
-        .ty = try self.simplePtrType(scope, src, decl_tv.ty, false, .One),
-        .val = try Value.Tag.decl_ref.create(scope.arena(), decl),
-    });
+    return Type.initPayload(&int_payload.base);
 }
 
-fn analyzeVarRef(self: *Module, scope: *Scope, src: usize, tv: TypedValue) InnerError!*Inst {
-    const variable = tv.val.castTag(.variable).?.data;
-
-    const ty = try self.simplePtrType(scope, src, tv.ty, variable.is_mutable, .One);
-    if (!variable.is_mutable and !variable.is_extern) {
-        return self.constInst(scope, src, .{
-            .ty = ty,
-            .val = try Value.Tag.ref_val.create(scope.arena(), variable.init),
-        });
-    }
+/// We don't return a pointer to the new error note because the pointer
+/// becomes invalid when you add another one.
+pub fn errNote(
+    mod: *Module,
+    scope: *Scope,
+    src: LazySrcLoc,
+    parent: *ErrorMsg,
+    comptime format: []const u8,
+    args: anytype,
+) error{OutOfMemory}!void {
+    const msg = try std.fmt.allocPrint(mod.gpa, format, args);
+    errdefer mod.gpa.free(msg);
 
-    const b = try self.requireRuntimeBlock(scope, src);
-    const inst = try b.arena.create(Inst.VarPtr);
-    inst.* = .{
-        .base = .{
-            .tag = .varptr,
-            .ty = ty,
-            .src = src,
+    parent.notes = try mod.gpa.realloc(parent.notes, parent.notes.len + 1);
+    parent.notes[parent.notes.len - 1] = .{
+        .src_loc = .{
+            .file_scope = scope.getFileScope(),
+            .byte_offset = src,
         },
-        .variable = variable,
-    };
-    try b.instructions.append(self.gpa, &inst.base);
-    return &inst.base;
-}
-
-pub fn analyzeRef(mod: *Module, scope: *Scope, src: usize, operand: *Inst) InnerError!*Inst {
-    const ptr_type = try mod.simplePtrType(scope, src, operand.ty, false, .One);
-
-    if (operand.value()) |val| {
-        return mod.constInst(scope, src, .{
-            .ty = ptr_type,
-            .val = try Value.Tag.ref_val.create(scope.arena(), val),
-        });
-    }
-
-    const b = try mod.requireRuntimeBlock(scope, src);
-    return mod.addUnOp(b, src, ptr_type, .ref, operand);
-}
-
-pub fn analyzeDeref(self: *Module, scope: *Scope, src: usize, ptr: *Inst, ptr_src: usize) InnerError!*Inst {
-    const elem_ty = switch (ptr.ty.zigTypeTag()) {
-        .Pointer => ptr.ty.elemType(),
-        else => return self.fail(scope, ptr_src, "expected pointer, found '{}'", .{ptr.ty}),
-    };
-    if (ptr.value()) |val| {
-        return self.constInst(scope, src, .{
-            .ty = elem_ty,
-            .val = try val.pointerDeref(scope.arena()),
-        });
-    }
-
-    const b = try self.requireRuntimeBlock(scope, src);
-    return self.addUnOp(b, src, elem_ty, .load, ptr);
-}
-
-pub fn analyzeDeclRefByName(self: *Module, scope: *Scope, src: usize, decl_name: []const u8) InnerError!*Inst {
-    const decl = self.lookupDeclName(scope, decl_name) orelse
-        return self.fail(scope, src, "decl '{s}' not found", .{decl_name});
-    return self.analyzeDeclRef(scope, src, decl);
-}
-
-pub fn wantSafety(self: *Module, scope: *Scope) bool {
-    // TODO take into account scope's safety overrides
-    return switch (self.optimizeMode()) {
-        .Debug => true,
-        .ReleaseSafe => true,
-        .ReleaseFast => false,
-        .ReleaseSmall => false,
-    };
-}
-
-pub fn analyzeIsNull(
-    self: *Module,
-    scope: *Scope,
-    src: usize,
-    operand: *Inst,
-    invert_logic: bool,
-) InnerError!*Inst {
-    if (operand.value()) |opt_val| {
-        const is_null = opt_val.isNull();
-        const bool_value = if (invert_logic) !is_null else is_null;
-        return self.constBool(scope, src, bool_value);
-    }
-    const b = try self.requireRuntimeBlock(scope, src);
-    const inst_tag: Inst.Tag = if (invert_logic) .is_non_null else .is_null;
-    return self.addUnOp(b, src, Type.initTag(.bool), inst_tag, operand);
-}
-
-pub fn analyzeIsErr(self: *Module, scope: *Scope, src: usize, operand: *Inst) InnerError!*Inst {
-    const ot = operand.ty.zigTypeTag();
-    if (ot != .ErrorSet and ot != .ErrorUnion) return self.constBool(scope, src, false);
-    if (ot == .ErrorSet) return self.constBool(scope, src, true);
-    assert(ot == .ErrorUnion);
-    if (operand.value()) |err_union| {
-        return self.constBool(scope, src, err_union.getError() != null);
-    }
-    const b = try self.requireRuntimeBlock(scope, src);
-    return self.addUnOp(b, src, Type.initTag(.bool), .is_err, operand);
-}
-
-pub fn analyzeSlice(self: *Module, scope: *Scope, src: usize, array_ptr: *Inst, start: *Inst, end_opt: ?*Inst, sentinel_opt: ?*Inst) InnerError!*Inst {
-    const ptr_child = switch (array_ptr.ty.zigTypeTag()) {
-        .Pointer => array_ptr.ty.elemType(),
-        else => return self.fail(scope, src, "expected pointer, found '{}'", .{array_ptr.ty}),
-    };
-
-    var array_type = ptr_child;
-    const elem_type = switch (ptr_child.zigTypeTag()) {
-        .Array => ptr_child.elemType(),
-        .Pointer => blk: {
-            if (ptr_child.isSinglePointer()) {
-                if (ptr_child.elemType().zigTypeTag() == .Array) {
-                    array_type = ptr_child.elemType();
-                    break :blk ptr_child.elemType().elemType();
-                }
-
-                return self.fail(scope, src, "slice of single-item pointer", .{});
-            }
-            break :blk ptr_child.elemType();
-        },
-        else => return self.fail(scope, src, "slice of non-array type '{}'", .{ptr_child}),
-    };
-
-    const slice_sentinel = if (sentinel_opt) |sentinel| blk: {
-        const casted = try self.coerce(scope, elem_type, sentinel);
-        break :blk try self.resolveConstValue(scope, casted);
-    } else null;
-
-    var return_ptr_size: std.builtin.TypeInfo.Pointer.Size = .Slice;
-    var return_elem_type = elem_type;
-    if (end_opt) |end| {
-        if (end.value()) |end_val| {
-            if (start.value()) |start_val| {
-                const start_u64 = start_val.toUnsignedInt();
-                const end_u64 = end_val.toUnsignedInt();
-                if (start_u64 > end_u64) {
-                    return self.fail(scope, src, "out of bounds slice", .{});
-                }
-
-                const len = end_u64 - start_u64;
-                const array_sentinel = if (array_type.zigTypeTag() == .Array and end_u64 == array_type.arrayLen())
-                    array_type.sentinel()
-                else
-                    slice_sentinel;
-                return_elem_type = try self.arrayType(scope, len, array_sentinel, elem_type);
-                return_ptr_size = .One;
-            }
-        }
-    }
-    const return_type = try self.ptrType(
-        scope,
-        src,
-        return_elem_type,
-        if (end_opt == null) slice_sentinel else null,
-        0, // TODO alignment
-        0,
-        0,
-        !ptr_child.isConstPtr(),
-        ptr_child.isAllowzeroPtr(),
-        ptr_child.isVolatilePtr(),
-        return_ptr_size,
-    );
-
-    return self.fail(scope, src, "TODO implement analysis of slice", .{});
-}
-
-pub fn analyzeImport(self: *Module, scope: *Scope, src: usize, target_string: []const u8) !*Scope.File {
-    const cur_pkg = scope.getFileScope().pkg;
-    const cur_pkg_dir_path = cur_pkg.root_src_directory.path orelse ".";
-    const found_pkg = cur_pkg.table.get(target_string);
-
-    const resolved_path = if (found_pkg) |pkg|
-        try std.fs.path.resolve(self.gpa, &[_][]const u8{ pkg.root_src_directory.path orelse ".", pkg.root_src_path })
-    else
-        try std.fs.path.resolve(self.gpa, &[_][]const u8{ cur_pkg_dir_path, target_string });
-    errdefer self.gpa.free(resolved_path);
-
-    if (self.import_table.get(resolved_path)) |some| {
-        self.gpa.free(resolved_path);
-        return some;
-    }
-
-    if (found_pkg == null) {
-        const resolved_root_path = try std.fs.path.resolve(self.gpa, &[_][]const u8{cur_pkg_dir_path});
-        defer self.gpa.free(resolved_root_path);
-
-        if (!mem.startsWith(u8, resolved_path, resolved_root_path)) {
-            return error.ImportOutsidePkgPath;
-        }
-    }
-
-    // TODO Scope.Container arena for ty and sub_file_path
-    const file_scope = try self.gpa.create(Scope.File);
-    errdefer self.gpa.destroy(file_scope);
-    const struct_ty = try Type.Tag.empty_struct.create(self.gpa, &file_scope.root_container);
-    errdefer self.gpa.destroy(struct_ty.castTag(.empty_struct).?);
-
-    file_scope.* = .{
-        .sub_file_path = resolved_path,
-        .source = .{ .unloaded = {} },
-        .tree = undefined,
-        .status = .never_loaded,
-        .pkg = found_pkg orelse cur_pkg,
-        .root_container = .{
-            .file_scope = file_scope,
-            .decls = .{},
-            .ty = struct_ty,
-        },
-    };
-    self.analyzeContainer(&file_scope.root_container) catch |err| switch (err) {
-        error.AnalysisFail => {
-            assert(self.comp.totalErrorCount() != 0);
-        },
-        else => |e| return e,
-    };
-    try self.import_table.put(self.gpa, file_scope.sub_file_path, file_scope);
-    return file_scope;
-}
-
-/// Asserts that lhs and rhs types are both numeric.
-pub fn cmpNumeric(
-    self: *Module,
-    scope: *Scope,
-    src: usize,
-    lhs: *Inst,
-    rhs: *Inst,
-    op: std.math.CompareOperator,
-) InnerError!*Inst {
-    assert(lhs.ty.isNumeric());
-    assert(rhs.ty.isNumeric());
-
-    const lhs_ty_tag = lhs.ty.zigTypeTag();
-    const rhs_ty_tag = rhs.ty.zigTypeTag();
-
-    if (lhs_ty_tag == .Vector and rhs_ty_tag == .Vector) {
-        if (lhs.ty.arrayLen() != rhs.ty.arrayLen()) {
-            return self.fail(scope, src, "vector length mismatch: {d} and {d}", .{
-                lhs.ty.arrayLen(),
-                rhs.ty.arrayLen(),
-            });
-        }
-        return self.fail(scope, src, "TODO implement support for vectors in cmpNumeric", .{});
-    } else if (lhs_ty_tag == .Vector or rhs_ty_tag == .Vector) {
-        return self.fail(scope, src, "mixed scalar and vector operands to comparison operator: '{}' and '{}'", .{
-            lhs.ty,
-            rhs.ty,
-        });
-    }
-
-    if (lhs.value()) |lhs_val| {
-        if (rhs.value()) |rhs_val| {
-            return self.constBool(scope, src, Value.compare(lhs_val, op, rhs_val));
-        }
-    }
-
-    // TODO handle comparisons against lazy zero values
-    // Some values can be compared against zero without being runtime known or without forcing
-    // a full resolution of their value, for example `@sizeOf(@Frame(function))` is known to
-    // always be nonzero, and we benefit from not forcing the full evaluation and stack frame layout
-    // of this function if we don't need to.
-
-    // It must be a runtime comparison.
-    const b = try self.requireRuntimeBlock(scope, src);
-    // For floats, emit a float comparison instruction.
-    const lhs_is_float = switch (lhs_ty_tag) {
-        .Float, .ComptimeFloat => true,
-        else => false,
-    };
-    const rhs_is_float = switch (rhs_ty_tag) {
-        .Float, .ComptimeFloat => true,
-        else => false,
-    };
-    if (lhs_is_float and rhs_is_float) {
-        // Implicit cast the smaller one to the larger one.
-        const dest_type = x: {
-            if (lhs_ty_tag == .ComptimeFloat) {
-                break :x rhs.ty;
-            } else if (rhs_ty_tag == .ComptimeFloat) {
-                break :x lhs.ty;
-            }
-            if (lhs.ty.floatBits(self.getTarget()) >= rhs.ty.floatBits(self.getTarget())) {
-                break :x lhs.ty;
-            } else {
-                break :x rhs.ty;
-            }
-        };
-        const casted_lhs = try self.coerce(scope, dest_type, lhs);
-        const casted_rhs = try self.coerce(scope, dest_type, rhs);
-        return self.addBinOp(b, src, dest_type, Inst.Tag.fromCmpOp(op), casted_lhs, casted_rhs);
-    }
-    // For mixed unsigned integer sizes, implicit cast both operands to the larger integer.
-    // For mixed signed and unsigned integers, implicit cast both operands to a signed
-    // integer with + 1 bit.
-    // For mixed floats and integers, extract the integer part from the float, cast that to
-    // a signed integer with mantissa bits + 1, and if there was any non-integral part of the float,
-    // add/subtract 1.
-    const lhs_is_signed = if (lhs.value()) |lhs_val|
-        lhs_val.compareWithZero(.lt)
-    else
-        (lhs.ty.isFloat() or lhs.ty.isSignedInt());
-    const rhs_is_signed = if (rhs.value()) |rhs_val|
-        rhs_val.compareWithZero(.lt)
-    else
-        (rhs.ty.isFloat() or rhs.ty.isSignedInt());
-    const dest_int_is_signed = lhs_is_signed or rhs_is_signed;
-
-    var dest_float_type: ?Type = null;
-
-    var lhs_bits: usize = undefined;
-    if (lhs.value()) |lhs_val| {
-        if (lhs_val.isUndef())
-            return self.constUndef(scope, src, Type.initTag(.bool));
-        const is_unsigned = if (lhs_is_float) x: {
-            var bigint_space: Value.BigIntSpace = undefined;
-            var bigint = try lhs_val.toBigInt(&bigint_space).toManaged(self.gpa);
-            defer bigint.deinit();
-            const zcmp = lhs_val.orderAgainstZero();
-            if (lhs_val.floatHasFraction()) {
-                switch (op) {
-                    .eq => return self.constBool(scope, src, false),
-                    .neq => return self.constBool(scope, src, true),
-                    else => {},
-                }
-                if (zcmp == .lt) {
-                    try bigint.addScalar(bigint.toConst(), -1);
-                } else {
-                    try bigint.addScalar(bigint.toConst(), 1);
-                }
-            }
-            lhs_bits = bigint.toConst().bitCountTwosComp();
-            break :x (zcmp != .lt);
-        } else x: {
-            lhs_bits = lhs_val.intBitCountTwosComp();
-            break :x (lhs_val.orderAgainstZero() != .lt);
-        };
-        lhs_bits += @boolToInt(is_unsigned and dest_int_is_signed);
-    } else if (lhs_is_float) {
-        dest_float_type = lhs.ty;
-    } else {
-        const int_info = lhs.ty.intInfo(self.getTarget());
-        lhs_bits = int_info.bits + @boolToInt(int_info.signedness == .unsigned and dest_int_is_signed);
-    }
-
-    var rhs_bits: usize = undefined;
-    if (rhs.value()) |rhs_val| {
-        if (rhs_val.isUndef())
-            return self.constUndef(scope, src, Type.initTag(.bool));
-        const is_unsigned = if (rhs_is_float) x: {
-            var bigint_space: Value.BigIntSpace = undefined;
-            var bigint = try rhs_val.toBigInt(&bigint_space).toManaged(self.gpa);
-            defer bigint.deinit();
-            const zcmp = rhs_val.orderAgainstZero();
-            if (rhs_val.floatHasFraction()) {
-                switch (op) {
-                    .eq => return self.constBool(scope, src, false),
-                    .neq => return self.constBool(scope, src, true),
-                    else => {},
-                }
-                if (zcmp == .lt) {
-                    try bigint.addScalar(bigint.toConst(), -1);
-                } else {
-                    try bigint.addScalar(bigint.toConst(), 1);
-                }
-            }
-            rhs_bits = bigint.toConst().bitCountTwosComp();
-            break :x (zcmp != .lt);
-        } else x: {
-            rhs_bits = rhs_val.intBitCountTwosComp();
-            break :x (rhs_val.orderAgainstZero() != .lt);
-        };
-        rhs_bits += @boolToInt(is_unsigned and dest_int_is_signed);
-    } else if (rhs_is_float) {
-        dest_float_type = rhs.ty;
-    } else {
-        const int_info = rhs.ty.intInfo(self.getTarget());
-        rhs_bits = int_info.bits + @boolToInt(int_info.signedness == .unsigned and dest_int_is_signed);
-    }
-
-    const dest_type = if (dest_float_type) |ft| ft else blk: {
-        const max_bits = std.math.max(lhs_bits, rhs_bits);
-        const casted_bits = std.math.cast(u16, max_bits) catch |err| switch (err) {
-            error.Overflow => return self.fail(scope, src, "{d} exceeds maximum integer bit count", .{max_bits}),
-        };
-        break :blk try self.makeIntType(scope, dest_int_is_signed, casted_bits);
-    };
-    const casted_lhs = try self.coerce(scope, dest_type, lhs);
-    const casted_rhs = try self.coerce(scope, dest_type, rhs);
-
-    return self.addBinOp(b, src, Type.initTag(.bool), Inst.Tag.fromCmpOp(op), casted_lhs, casted_rhs);
-}
-
-fn wrapOptional(self: *Module, scope: *Scope, dest_type: Type, inst: *Inst) !*Inst {
-    if (inst.value()) |val| {
-        return self.constInst(scope, inst.src, .{ .ty = dest_type, .val = val });
-    }
-
-    const b = try self.requireRuntimeBlock(scope, inst.src);
-    return self.addUnOp(b, inst.src, dest_type, .wrap_optional, inst);
-}
-
-fn wrapErrorUnion(self: *Module, scope: *Scope, dest_type: Type, inst: *Inst) !*Inst {
-    // TODO deal with inferred error sets
-    const err_union = dest_type.castTag(.error_union).?;
-    if (inst.value()) |val| {
-        const to_wrap = if (inst.ty.zigTypeTag() != .ErrorSet) blk: {
-            _ = try self.coerce(scope, err_union.data.payload, inst);
-            break :blk val;
-        } else switch (err_union.data.error_set.tag()) {
-            .anyerror => val,
-            .error_set_single => blk: {
-                const n = err_union.data.error_set.castTag(.error_set_single).?.data;
-                if (!mem.eql(u8, val.castTag(.@"error").?.data.name, n))
-                    return self.fail(scope, inst.src, "expected type '{}', found type '{}'", .{ err_union.data.error_set, inst.ty });
-                break :blk val;
-            },
-            .error_set => blk: {
-                const f = err_union.data.error_set.castTag(.error_set).?.data.typed_value.most_recent.typed_value.val.castTag(.error_set).?.data.fields;
-                if (f.get(val.castTag(.@"error").?.data.name) == null)
-                    return self.fail(scope, inst.src, "expected type '{}', found type '{}'", .{ err_union.data.error_set, inst.ty });
-                break :blk val;
-            },
-            else => unreachable,
-        };
-
-        return self.constInst(scope, inst.src, .{
-            .ty = dest_type,
-            // creating a SubValue for the error_union payload
-            .val = try Value.Tag.error_union.create(
-                scope.arena(),
-                to_wrap,
-            ),
-        });
-    }
-
-    const b = try self.requireRuntimeBlock(scope, inst.src);
-
-    // we are coercing from E to E!T
-    if (inst.ty.zigTypeTag() == .ErrorSet) {
-        var coerced = try self.coerce(scope, err_union.data.error_set, inst);
-        return self.addUnOp(b, inst.src, dest_type, .wrap_errunion_err, coerced);
-    } else {
-        var coerced = try self.coerce(scope, err_union.data.payload, inst);
-        return self.addUnOp(b, inst.src, dest_type, .wrap_errunion_payload, coerced);
-    }
-}
-
-fn makeIntType(self: *Module, scope: *Scope, signed: bool, bits: u16) !Type {
-    const int_payload = try scope.arena().create(Type.Payload.Bits);
-    int_payload.* = .{
-        .base = .{
-            .tag = if (signed) .int_signed else .int_unsigned,
-        },
-        .data = bits,
-    };
-    return Type.initPayload(&int_payload.base);
-}
-
-pub fn resolvePeerTypes(self: *Module, scope: *Scope, instructions: []*Inst) !Type {
-    if (instructions.len == 0)
-        return Type.initTag(.noreturn);
-
-    if (instructions.len == 1)
-        return instructions[0].ty;
-
-    var chosen = instructions[0];
-    for (instructions[1..]) |candidate| {
-        if (candidate.ty.eql(chosen.ty))
-            continue;
-        if (candidate.ty.zigTypeTag() == .NoReturn)
-            continue;
-        if (chosen.ty.zigTypeTag() == .NoReturn) {
-            chosen = candidate;
-            continue;
-        }
-        if (candidate.ty.zigTypeTag() == .Undefined)
-            continue;
-        if (chosen.ty.zigTypeTag() == .Undefined) {
-            chosen = candidate;
-            continue;
-        }
-        if (chosen.ty.isInt() and
-            candidate.ty.isInt() and
-            chosen.ty.isSignedInt() == candidate.ty.isSignedInt())
-        {
-            if (chosen.ty.intInfo(self.getTarget()).bits < candidate.ty.intInfo(self.getTarget()).bits) {
-                chosen = candidate;
-            }
-            continue;
-        }
-        if (chosen.ty.isFloat() and candidate.ty.isFloat()) {
-            if (chosen.ty.floatBits(self.getTarget()) < candidate.ty.floatBits(self.getTarget())) {
-                chosen = candidate;
-            }
-            continue;
-        }
-
-        if (chosen.ty.zigTypeTag() == .ComptimeInt and candidate.ty.isInt()) {
-            chosen = candidate;
-            continue;
-        }
-
-        if (chosen.ty.isInt() and candidate.ty.zigTypeTag() == .ComptimeInt) {
-            continue;
-        }
-
-        // TODO error notes pointing out each type
-        return self.fail(scope, candidate.src, "incompatible types: '{}' and '{}'", .{ chosen.ty, candidate.ty });
-    }
-
-    return chosen.ty;
-}
-
-pub fn coerce(self: *Module, scope: *Scope, dest_type: Type, inst: *Inst) InnerError!*Inst {
-    if (dest_type.tag() == .var_args_param) {
-        return self.coerceVarArgParam(scope, inst);
-    }
-    // If the types are the same, we can return the operand.
-    if (dest_type.eql(inst.ty))
-        return inst;
-
-    const in_memory_result = coerceInMemoryAllowed(dest_type, inst.ty);
-    if (in_memory_result == .ok) {
-        return self.bitcast(scope, dest_type, inst);
-    }
-
-    // undefined to anything
-    if (inst.value()) |val| {
-        if (val.isUndef() or inst.ty.zigTypeTag() == .Undefined) {
-            return self.constInst(scope, inst.src, .{ .ty = dest_type, .val = val });
-        }
-    }
-    assert(inst.ty.zigTypeTag() != .Undefined);
-
-    // null to ?T
-    if (dest_type.zigTypeTag() == .Optional and inst.ty.zigTypeTag() == .Null) {
-        return self.constInst(scope, inst.src, .{ .ty = dest_type, .val = Value.initTag(.null_value) });
-    }
-
-    // T to ?T
-    if (dest_type.zigTypeTag() == .Optional) {
-        var buf: Type.Payload.ElemType = undefined;
-        const child_type = dest_type.optionalChild(&buf);
-        if (child_type.eql(inst.ty)) {
-            return self.wrapOptional(scope, dest_type, inst);
-        } else if (try self.coerceNum(scope, child_type, inst)) |some| {
-            return self.wrapOptional(scope, dest_type, some);
-        }
-    }
-
-    // T to E!T or E to E!T
-    if (dest_type.tag() == .error_union) {
-        return try self.wrapErrorUnion(scope, dest_type, inst);
-    }
-
-    // Coercions where the source is a single pointer to an array.
-    src_array_ptr: {
-        if (!inst.ty.isSinglePointer()) break :src_array_ptr;
-        const array_type = inst.ty.elemType();
-        if (array_type.zigTypeTag() != .Array) break :src_array_ptr;
-        const array_elem_type = array_type.elemType();
-        if (inst.ty.isConstPtr() and !dest_type.isConstPtr()) break :src_array_ptr;
-        if (inst.ty.isVolatilePtr() and !dest_type.isVolatilePtr()) break :src_array_ptr;
-
-        const dst_elem_type = dest_type.elemType();
-        switch (coerceInMemoryAllowed(dst_elem_type, array_elem_type)) {
-            .ok => {},
-            .no_match => break :src_array_ptr,
-        }
-
-        switch (dest_type.ptrSize()) {
-            .Slice => {
-                // *[N]T to []T
-                return self.coerceArrayPtrToSlice(scope, dest_type, inst);
-            },
-            .C => {
-                // *[N]T to [*c]T
-                return self.coerceArrayPtrToMany(scope, dest_type, inst);
-            },
-            .Many => {
-                // *[N]T to [*]T
-                // *[N:s]T to [*:s]T
-                const src_sentinel = array_type.sentinel();
-                const dst_sentinel = dest_type.sentinel();
-                if (src_sentinel == null and dst_sentinel == null)
-                    return self.coerceArrayPtrToMany(scope, dest_type, inst);
-
-                if (src_sentinel) |src_s| {
-                    if (dst_sentinel) |dst_s| {
-                        if (src_s.eql(dst_s)) {
-                            return self.coerceArrayPtrToMany(scope, dest_type, inst);
-                        }
-                    }
-                }
-            },
-            .One => {},
-        }
-    }
-
-    // comptime known number to other number
-    if (try self.coerceNum(scope, dest_type, inst)) |some|
-        return some;
-
-    // integer widening
-    if (inst.ty.zigTypeTag() == .Int and dest_type.zigTypeTag() == .Int) {
-        assert(inst.value() == null); // handled above
-
-        const src_info = inst.ty.intInfo(self.getTarget());
-        const dst_info = dest_type.intInfo(self.getTarget());
-        if ((src_info.signedness == dst_info.signedness and dst_info.bits >= src_info.bits) or
-            // small enough unsigned ints can get casted to large enough signed ints
-            (src_info.signedness == .signed and dst_info.signedness == .unsigned and dst_info.bits > src_info.bits))
-        {
-            const b = try self.requireRuntimeBlock(scope, inst.src);
-            return self.addUnOp(b, inst.src, dest_type, .intcast, inst);
-        }
-    }
-
-    // float widening
-    if (inst.ty.zigTypeTag() == .Float and dest_type.zigTypeTag() == .Float) {
-        assert(inst.value() == null); // handled above
-
-        const src_bits = inst.ty.floatBits(self.getTarget());
-        const dst_bits = dest_type.floatBits(self.getTarget());
-        if (dst_bits >= src_bits) {
-            const b = try self.requireRuntimeBlock(scope, inst.src);
-            return self.addUnOp(b, inst.src, dest_type, .floatcast, inst);
-        }
-    }
-
-    return self.fail(scope, inst.src, "expected {}, found {}", .{ dest_type, inst.ty });
-}
-
-pub fn coerceNum(self: *Module, scope: *Scope, dest_type: Type, inst: *Inst) InnerError!?*Inst {
-    const val = inst.value() orelse return null;
-    const src_zig_tag = inst.ty.zigTypeTag();
-    const dst_zig_tag = dest_type.zigTypeTag();
-
-    if (dst_zig_tag == .ComptimeInt or dst_zig_tag == .Int) {
-        if (src_zig_tag == .Float or src_zig_tag == .ComptimeFloat) {
-            if (val.floatHasFraction()) {
-                return self.fail(scope, inst.src, "fractional component prevents float value {} from being casted to type '{}'", .{ val, inst.ty });
-            }
-            return self.fail(scope, inst.src, "TODO float to int", .{});
-        } else if (src_zig_tag == .Int or src_zig_tag == .ComptimeInt) {
-            if (!val.intFitsInType(dest_type, self.getTarget())) {
-                return self.fail(scope, inst.src, "type {} cannot represent integer value {}", .{ inst.ty, val });
-            }
-            return self.constInst(scope, inst.src, .{ .ty = dest_type, .val = val });
-        }
-    } else if (dst_zig_tag == .ComptimeFloat or dst_zig_tag == .Float) {
-        if (src_zig_tag == .Float or src_zig_tag == .ComptimeFloat) {
-            const res = val.floatCast(scope.arena(), dest_type, self.getTarget()) catch |err| switch (err) {
-                error.Overflow => return self.fail(
-                    scope,
-                    inst.src,
-                    "cast of value {} to type '{}' loses information",
-                    .{ val, dest_type },
-                ),
-                error.OutOfMemory => return error.OutOfMemory,
-            };
-            return self.constInst(scope, inst.src, .{ .ty = dest_type, .val = res });
-        } else if (src_zig_tag == .Int or src_zig_tag == .ComptimeInt) {
-            return self.fail(scope, inst.src, "TODO int to float", .{});
-        }
-    }
-    return null;
-}
-
-pub fn coerceVarArgParam(mod: *Module, scope: *Scope, inst: *Inst) !*Inst {
-    switch (inst.ty.zigTypeTag()) {
-        .ComptimeInt, .ComptimeFloat => return mod.fail(scope, inst.src, "integer and float literals in var args function must be casted", .{}),
-        else => {},
-    }
-    // TODO implement more of this function.
-    return inst;
-}
-
-pub fn storePtr(self: *Module, scope: *Scope, src: usize, ptr: *Inst, uncasted_value: *Inst) !*Inst {
-    if (ptr.ty.isConstPtr())
-        return self.fail(scope, src, "cannot assign to constant", .{});
-
-    const elem_ty = ptr.ty.elemType();
-    const value = try self.coerce(scope, elem_ty, uncasted_value);
-    if (elem_ty.onePossibleValue() != null)
-        return self.constVoid(scope, src);
-
-    // TODO handle comptime pointer writes
-    // TODO handle if the element type requires comptime
-
-    const b = try self.requireRuntimeBlock(scope, src);
-    return self.addBinOp(b, src, Type.initTag(.void), .store, ptr, value);
-}
-
-pub fn bitcast(self: *Module, scope: *Scope, dest_type: Type, inst: *Inst) !*Inst {
-    if (inst.value()) |val| {
-        // Keep the comptime Value representation; take the new type.
-        return self.constInst(scope, inst.src, .{ .ty = dest_type, .val = val });
-    }
-    // TODO validate the type size and other compile errors
-    const b = try self.requireRuntimeBlock(scope, inst.src);
-    return self.addUnOp(b, inst.src, dest_type, .bitcast, inst);
-}
-
-fn coerceArrayPtrToSlice(self: *Module, scope: *Scope, dest_type: Type, inst: *Inst) !*Inst {
-    if (inst.value()) |val| {
-        // The comptime Value representation is compatible with both types.
-        return self.constInst(scope, inst.src, .{ .ty = dest_type, .val = val });
-    }
-    return self.fail(scope, inst.src, "TODO implement coerceArrayPtrToSlice runtime instruction", .{});
-}
-
-fn coerceArrayPtrToMany(self: *Module, scope: *Scope, dest_type: Type, inst: *Inst) !*Inst {
-    if (inst.value()) |val| {
-        // The comptime Value representation is compatible with both types.
-        return self.constInst(scope, inst.src, .{ .ty = dest_type, .val = val });
-    }
-    return self.fail(scope, inst.src, "TODO implement coerceArrayPtrToMany runtime instruction", .{});
-}
-
-/// We don't return a pointer to the new error note because the pointer
-/// becomes invalid when you add another one.
-pub fn errNote(
-    mod: *Module,
-    scope: *Scope,
-    src: usize,
-    parent: *ErrorMsg,
-    comptime format: []const u8,
-    args: anytype,
-) error{OutOfMemory}!void {
-    const msg = try std.fmt.allocPrint(mod.gpa, format, args);
-    errdefer mod.gpa.free(msg);
-
-    parent.notes = try mod.gpa.realloc(parent.notes, parent.notes.len + 1);
-    parent.notes[parent.notes.len - 1] = .{
-        .src_loc = .{
-            .file_scope = scope.getFileScope(),
-            .byte_offset = src,
-        },
-        .msg = msg,
+        .msg = msg,
     };
 }
 
 pub fn errMsg(
     mod: *Module,
     scope: *Scope,
-    src_byte_offset: usize,
+    src: LazySrcLoc,
     comptime format: []const u8,
     args: anytype,
 ) error{OutOfMemory}!*ErrorMsg {
     return ErrorMsg.create(mod.gpa, .{
-        .file_scope = scope.getFileScope(),
-        .byte_offset = src_byte_offset,
+        .decl = scope.srcDecl().?,
+        .lazy = src,
     }, format, args);
 }
 
 pub fn fail(
     mod: *Module,
     scope: *Scope,
-    src_byte_offset: usize,
+    src: LazySrcLoc,
     comptime format: []const u8,
     args: anytype,
 ) InnerError {
-    const err_msg = try mod.errMsg(scope, src_byte_offset, format, args);
+    const err_msg = try mod.errMsg(scope, src, format, args);
     return mod.failWithOwnedErrorMsg(scope, err_msg);
 }
 
+/// Same as `fail`, except it takes an absolute byte offset. The function converts it to
+/// a `LazySrcLoc` relative to the containing `Decl` by subtracting the `Decl`'s byte offset.
+pub fn failOff(
+    mod: *Module,
+    scope: *Scope,
+    byte_offset: u32,
+    comptime format: []const u8,
+    args: anytype,
+) InnerError {
+    const decl_byte_offset = scope.srcDecl().?.srcByteOffset();
+    const src: LazySrcLoc = .{ .byte_offset = byte_offset - decl_byte_offset };
+    return mod.fail(scope, src, format, args);
+}
+
+/// Same as `fail`, except it takes a token index. The function converts it to a
+/// `LazySrcLoc` relative to the containing `Decl` by subtracting the `Decl`'s token index.
 pub fn failTok(
-    self: *Module,
+    mod: *Module,
     scope: *Scope,
     token_index: ast.TokenIndex,
     comptime format: []const u8,
     args: anytype,
 ) InnerError {
-    const src = scope.tree().tokens.items(.start)[token_index];
-    return self.fail(scope, src, format, args);
+    const decl_token = scope.srcDecl().?.srcToken();
+    const src: LazySrcLoc = .{ .token_offset = token_index - decl_token };
+    return mod.fail(scope, src, format, args);
 }
 
+/// Same as `fail`, except it takes an AST node index. The function converts it to a
+/// `LazySrcLoc` relative to the containing `Decl` by subtracting the `Decl`'s node index.
 pub fn failNode(
-    self: *Module,
+    mod: *Module,
     scope: *Scope,
-    ast_node: ast.Node.Index,
+    node_index: ast.Node.Index,
     comptime format: []const u8,
     args: anytype,
 ) InnerError {
-    const tree = scope.tree();
-    const src = tree.tokens.items(.start)[tree.firstToken(ast_node)];
-    return self.fail(scope, src, format, args);
+    const decl_node = scope.srcDecl().?.srcNode();
+    const src: LazySrcLoc = .{ .node_offset = node_index - decl_node };
+    return mod.fail(scope, src, format, args);
 }
 
-pub fn failWithOwnedErrorMsg(self: *Module, scope: *Scope, err_msg: *ErrorMsg) InnerError {
+pub fn failWithOwnedErrorMsg(mod: *Module, scope: *Scope, err_msg: *ErrorMsg) InnerError {
     @setCold(true);
     {
-        errdefer err_msg.destroy(self.gpa);
-        try self.failed_decls.ensureCapacity(self.gpa, self.failed_decls.items().len + 1);
-        try self.failed_files.ensureCapacity(self.gpa, self.failed_files.items().len + 1);
+        errdefer err_msg.destroy(mod.gpa);
+        try mod.failed_decls.ensureCapacity(mod.gpa, mod.failed_decls.items().len + 1);
+        try mod.failed_files.ensureCapacity(mod.gpa, mod.failed_files.items().len + 1);
     }
     switch (scope.tag) {
         .block => {
@@ -3675,41 +3190,41 @@ pub fn failWithOwnedErrorMsg(self: *Module, scope: *Scope, err_msg: *ErrorMsg) I
                     func.state = .sema_failure;
                 } else {
                     block.owner_decl.analysis = .sema_failure;
-                    block.owner_decl.generation = self.generation;
+                    block.owner_decl.generation = mod.generation;
                 }
             } else {
                 if (block.func) |func| {
                     func.state = .sema_failure;
                 } else {
                     block.owner_decl.analysis = .sema_failure;
-                    block.owner_decl.generation = self.generation;
+                    block.owner_decl.generation = mod.generation;
                 }
             }
-            self.failed_decls.putAssumeCapacityNoClobber(block.owner_decl, err_msg);
+            mod.failed_decls.putAssumeCapacityNoClobber(block.owner_decl, err_msg);
         },
         .gen_zir, .gen_suspend => {
-            const gen_zir = scope.cast(Scope.GenZIR).?;
+            const gen_zir = scope.cast(Scope.GenZir).?;
             gen_zir.decl.analysis = .sema_failure;
-            gen_zir.decl.generation = self.generation;
-            self.failed_decls.putAssumeCapacityNoClobber(gen_zir.decl, err_msg);
+            gen_zir.decl.generation = mod.generation;
+            mod.failed_decls.putAssumeCapacityNoClobber(gen_zir.decl, err_msg);
         },
         .local_val => {
             const gen_zir = scope.cast(Scope.LocalVal).?.gen_zir;
             gen_zir.decl.analysis = .sema_failure;
-            gen_zir.decl.generation = self.generation;
-            self.failed_decls.putAssumeCapacityNoClobber(gen_zir.decl, err_msg);
+            gen_zir.decl.generation = mod.generation;
+            mod.failed_decls.putAssumeCapacityNoClobber(gen_zir.decl, err_msg);
         },
         .local_ptr => {
             const gen_zir = scope.cast(Scope.LocalPtr).?.gen_zir;
             gen_zir.decl.analysis = .sema_failure;
-            gen_zir.decl.generation = self.generation;
-            self.failed_decls.putAssumeCapacityNoClobber(gen_zir.decl, err_msg);
+            gen_zir.decl.generation = mod.generation;
+            mod.failed_decls.putAssumeCapacityNoClobber(gen_zir.decl, err_msg);
         },
         .gen_nosuspend => {
             const gen_zir = scope.cast(Scope.Nosuspend).?.gen_zir;
             gen_zir.decl.analysis = .sema_failure;
-            gen_zir.decl.generation = self.generation;
-            self.failed_decls.putAssumeCapacityNoClobber(gen_zir.decl, err_msg);
+            gen_zir.decl.generation = mod.generation;
+            mod.failed_decls.putAssumeCapacityNoClobber(gen_zir.decl, err_msg);
         },
         .file => unreachable,
         .container => unreachable,
@@ -3717,20 +3232,6 @@ pub fn failWithOwnedErrorMsg(self: *Module, scope: *Scope, err_msg: *ErrorMsg) I
     return error.AnalysisFail;
 }
 
-const InMemoryCoercionResult = enum {
-    ok,
-    no_match,
-};
-
-fn coerceInMemoryAllowed(dest_type: Type, src_type: Type) InMemoryCoercionResult {
-    if (dest_type.eql(src_type))
-        return .ok;
-
-    // TODO: implement more of this function
-
-    return .no_match;
-}
-
 fn srcHashEql(a: std.zig.SrcHash, b: std.zig.SrcHash) bool {
     return @bitCast(u128, a) == @bitCast(u128, b);
 }
@@ -3780,10 +3281,10 @@ pub fn intSub(allocator: *Allocator, lhs: Value, rhs: Value) !Value {
 }
 
 pub fn floatAdd(
-    self: *Module,
+    mod: *Module,
     scope: *Scope,
     float_type: Type,
-    src: usize,
+    src: LazySrcLoc,
     lhs: Value,
     rhs: Value,
 ) !Value {
@@ -3815,10 +3316,10 @@ pub fn floatAdd(
 }
 
 pub fn floatSub(
-    self: *Module,
+    mod: *Module,
     scope: *Scope,
     float_type: Type,
-    src: usize,
+    src: LazySrcLoc,
     lhs: Value,
     rhs: Value,
 ) !Value {
@@ -3850,9 +3351,8 @@ pub fn floatSub(
 }
 
 pub fn simplePtrType(
-    self: *Module,
-    scope: *Scope,
-    src: usize,
+    mod: *Module,
+    arena: *Allocator,
     elem_ty: Type,
     mutable: bool,
     size: std.builtin.TypeInfo.Pointer.Size,
@@ -3863,7 +3363,7 @@ pub fn simplePtrType(
     // TODO stage1 type inference bug
     const T = Type.Tag;
 
-    const type_payload = try scope.arena().create(Type.Payload.ElemType);
+    const type_payload = try arena.create(Type.Payload.ElemType);
     type_payload.* = .{
         .base = .{
             .tag = switch (size) {
@@ -3879,9 +3379,8 @@ pub fn simplePtrType(
 }
 
 pub fn ptrType(
-    self: *Module,
-    scope: *Scope,
-    src: usize,
+    mod: *Module,
+    arena: *Allocator,
     elem_ty: Type,
     sentinel: ?Value,
     @"align": u32,
@@ -3895,7 +3394,7 @@ pub fn ptrType(
     assert(host_size == 0 or bit_offset < host_size * 8);
 
     // TODO check if type can be represented by simplePtrType
-    return Type.Tag.pointer.create(scope.arena(), .{
+    return Type.Tag.pointer.create(arena, .{
         .pointee_type = elem_ty,
         .sentinel = sentinel,
         .@"align" = @"align",
@@ -3908,23 +3407,23 @@ pub fn ptrType(
     });
 }
 
-pub fn optionalType(self: *Module, scope: *Scope, child_type: Type) Allocator.Error!Type {
+pub fn optionalType(mod: *Module, arena: *Allocator, child_type: Type) Allocator.Error!Type {
     switch (child_type.tag()) {
         .single_const_pointer => return Type.Tag.optional_single_const_pointer.create(
-            scope.arena(),
+            arena,
             child_type.elemType(),
         ),
         .single_mut_pointer => return Type.Tag.optional_single_mut_pointer.create(
-            scope.arena(),
+            arena,
             child_type.elemType(),
         ),
-        else => return Type.Tag.optional.create(scope.arena(), child_type),
+        else => return Type.Tag.optional.create(arena, child_type),
     }
 }
 
 pub fn arrayType(
-    self: *Module,
-    scope: *Scope,
+    mod: *Module,
+    arena: *Allocator,
     len: u64,
     sentinel: ?Value,
     elem_type: Type,
@@ -3932,30 +3431,30 @@ pub fn arrayType(
     if (elem_type.eql(Type.initTag(.u8))) {
         if (sentinel) |some| {
             if (some.eql(Value.initTag(.zero))) {
-                return Type.Tag.array_u8_sentinel_0.create(scope.arena(), len);
+                return Type.Tag.array_u8_sentinel_0.create(arena, len);
             }
         } else {
-            return Type.Tag.array_u8.create(scope.arena(), len);
+            return Type.Tag.array_u8.create(arena, len);
         }
     }
 
     if (sentinel) |some| {
-        return Type.Tag.array_sentinel.create(scope.arena(), .{
+        return Type.Tag.array_sentinel.create(arena, .{
             .len = len,
             .sentinel = some,
             .elem_type = elem_type,
         });
     }
 
-    return Type.Tag.array.create(scope.arena(), .{
+    return Type.Tag.array.create(arena, .{
         .len = len,
         .elem_type = elem_type,
     });
 }
 
 pub fn errorUnionType(
-    self: *Module,
-    scope: *Scope,
+    mod: *Module,
+    arena: *Allocator,
     error_set: Type,
     payload: Type,
 ) Allocator.Error!Type {
@@ -3964,19 +3463,19 @@ pub fn errorUnionType(
         return Type.initTag(.anyerror_void_error_union);
     }
 
-    return Type.Tag.error_union.create(scope.arena(), .{
+    return Type.Tag.error_union.create(arena, .{
         .error_set = error_set,
         .payload = payload,
     });
 }
 
-pub fn anyframeType(self: *Module, scope: *Scope, return_type: Type) Allocator.Error!Type {
-    return Type.Tag.anyframe_T.create(scope.arena(), return_type);
+pub fn anyframeType(mod: *Module, arena: *Allocator, return_type: Type) Allocator.Error!Type {
+    return Type.Tag.anyframe_T.create(arena, return_type);
 }
 
-pub fn dumpInst(self: *Module, scope: *Scope, inst: *Inst) void {
+pub fn dumpInst(mod: *Module, scope: *Scope, inst: *ir.Inst) void {
     const zir_module = scope.namespace();
-    const source = zir_module.getSource(self) catch @panic("dumpInst failed to get source");
+    const source = zir_module.getSource(mod) catch @panic("dumpInst failed to get source");
     const loc = std.zig.findLineColumn(source, inst.src);
     if (inst.tag == .constant) {
         std.debug.print("constant ty={} val={} src={s}:{d}:{d}\n", .{
@@ -4006,267 +3505,113 @@ pub fn dumpInst(self: *Module, scope: *Scope, inst: *Inst) void {
     }
 }
 
-pub const PanicId = enum {
-    unreach,
-    unwrap_null,
-    unwrap_errunion,
-};
-
-pub fn addSafetyCheck(mod: *Module, parent_block: *Scope.Block, ok: *Inst, panic_id: PanicId) !void {
-    const block_inst = try parent_block.arena.create(Inst.Block);
-    block_inst.* = .{
-        .base = .{
-            .tag = Inst.Block.base_tag,
-            .ty = Type.initTag(.void),
-            .src = ok.src,
-        },
-        .body = .{
-            .instructions = try parent_block.arena.alloc(*Inst, 1), // Only need space for the condbr.
-        },
-    };
-
-    const ok_body: ir.Body = .{
-        .instructions = try parent_block.arena.alloc(*Inst, 1), // Only need space for the br_void.
-    };
-    const br_void = try parent_block.arena.create(Inst.BrVoid);
-    br_void.* = .{
-        .base = .{
-            .tag = .br_void,
-            .ty = Type.initTag(.noreturn),
-            .src = ok.src,
-        },
-        .block = block_inst,
-    };
-    ok_body.instructions[0] = &br_void.base;
-
-    var fail_block: Scope.Block = .{
-        .parent = parent_block,
-        .inst_table = parent_block.inst_table,
-        .func = parent_block.func,
-        .owner_decl = parent_block.owner_decl,
-        .src_decl = parent_block.src_decl,
-        .instructions = .{},
-        .arena = parent_block.arena,
-        .inlining = parent_block.inlining,
-        .is_comptime = parent_block.is_comptime,
-        .branch_quota = parent_block.branch_quota,
-    };
-
-    defer fail_block.instructions.deinit(mod.gpa);
-
-    _ = try mod.safetyPanic(&fail_block, ok.src, panic_id);
-
-    const fail_body: ir.Body = .{ .instructions = try parent_block.arena.dupe(*Inst, fail_block.instructions.items) };
-
-    const condbr = try parent_block.arena.create(Inst.CondBr);
-    condbr.* = .{
-        .base = .{
-            .tag = .condbr,
-            .ty = Type.initTag(.noreturn),
-            .src = ok.src,
-        },
-        .condition = ok,
-        .then_body = ok_body,
-        .else_body = fail_body,
-    };
-    block_inst.body.instructions[0] = &condbr.base;
-
-    try parent_block.instructions.append(mod.gpa, &block_inst.base);
-}
-
-pub fn safetyPanic(mod: *Module, block: *Scope.Block, src: usize, panic_id: PanicId) !*Inst {
-    // TODO Once we have a panic function to call, call it here instead of breakpoint.
-    _ = try mod.addNoOp(block, src, Type.initTag(.void), .breakpoint);
-    return mod.addNoOp(block, src, Type.initTag(.noreturn), .unreach);
-}
-
-pub fn getTarget(self: Module) Target {
-    return self.comp.bin_file.options.target;
+pub fn getTarget(mod: Module) Target {
+    return mod.comp.bin_file.options.target;
 }
 
-pub fn optimizeMode(self: Module) std.builtin.Mode {
-    return self.comp.bin_file.options.optimize_mode;
+pub fn optimizeMode(mod: Module) std.builtin.Mode {
+    return mod.comp.bin_file.options.optimize_mode;
 }
 
-pub fn validateVarType(mod: *Module, scope: *Scope, src: usize, ty: Type) !void {
-    if (!ty.isValidVarType(false)) {
-        return mod.fail(scope, src, "variable of type '{}' must be const or comptime", .{ty});
-    }
-}
-
-/// Identifier token -> String (allocated in scope.arena())
+/// Given an identifier token, obtain the string for it.
+/// If the token uses @"" syntax, parses it as a string literal, reporting any errors,
+/// and allocates the result within `scope.arena()`.
+/// Otherwise, returns a reference to the source code bytes directly.
+/// See also `appendIdentStr` and `parseStrLit`.
 pub fn identifierTokenString(mod: *Module, scope: *Scope, token: ast.TokenIndex) InnerError![]const u8 {
     const tree = scope.tree();
     const token_tags = tree.tokens.items(.tag);
     const token_starts = tree.tokens.items(.start);
     assert(token_tags[token] == .identifier);
-
     const ident_name = tree.tokenSlice(token);
-    if (mem.startsWith(u8, ident_name, "@")) {
-        const raw_string = ident_name[1..];
-        var bad_index: usize = undefined;
-        return std.zig.parseStringLiteral(scope.arena(), raw_string, &bad_index) catch |err| switch (err) {
-            error.InvalidCharacter => {
-                const bad_byte = raw_string[bad_index];
-                const src = token_starts[token];
-                return mod.fail(scope, src + 1 + bad_index, "invalid string literal character: '{c}'\n", .{bad_byte});
-            },
-            else => |e| return e,
-        };
+    if (!mem.startsWith(u8, ident_name, "@")) {
+        return ident_name;
     }
-    return ident_name;
+    var buf = std.ArrayList(u8).init(mod.gpa);
+    defer buf.deinit();
+    try mod.parseStrLit(scope, token, &buf, ident_name, 1);
+    // Dupe into the arena so the result lives as long as the Decl, as the
+    // doc comment promises; `buf.toOwnedSlice()` would hand back gpa memory.
+    return scope.arena().dupe(u8, buf.items);
 }
 
-pub fn emitBackwardBranch(mod: *Module, block: *Scope.Block, src: usize) !void {
-    const shared = block.inlining.?.shared;
-    shared.branch_count += 1;
-    if (shared.branch_count > block.branch_quota.*) {
-        // TODO show the "called from here" stack
-        return mod.fail(&block.base, src, "evaluation exceeded {d} backwards branches", .{
-            block.branch_quota.*,
-        });
+/// Given an identifier token, obtain the string for it (possibly parsing as a string
+/// literal if it is @"" syntax), and append the string to `buf`.
+/// See also `identifierTokenString` and `parseStrLit`.
+pub fn appendIdentStr(
+    mod: *Module,
+    scope: *Scope,
+    token: ast.TokenIndex,
+    buf: *ArrayList(u8),
+) InnerError!void {
+    const tree = scope.tree();
+    const token_tags = tree.tokens.items(.tag);
+    assert(token_tags[token] == .identifier);
+    const ident_name = tree.tokenSlice(token);
+    if (!mem.startsWith(u8, ident_name, "@")) {
+        return buf.appendSlice(ident_name);
+    } else {
+        return mod.parseStrLit(scope, token, buf, ident_name, 1);
     }
 }
 
-pub fn namedFieldPtr(
+/// Appends the result to `buf`.
+pub fn parseStrLit(
     mod: *Module,
     scope: *Scope,
-    src: usize,
-    object_ptr: *Inst,
-    field_name: []const u8,
-    field_name_src: usize,
-) InnerError!*Inst {
-    const elem_ty = switch (object_ptr.ty.zigTypeTag()) {
-        .Pointer => object_ptr.ty.elemType(),
-        else => return mod.fail(scope, object_ptr.src, "expected pointer, found '{}'", .{object_ptr.ty}),
-    };
-    switch (elem_ty.zigTypeTag()) {
-        .Array => {
-            if (mem.eql(u8, field_name, "len")) {
-                return mod.constInst(scope, src, .{
-                    .ty = Type.initTag(.single_const_pointer_to_comptime_int),
-                    .val = try Value.Tag.ref_val.create(
-                        scope.arena(),
-                        try Value.Tag.int_u64.create(scope.arena(), elem_ty.arrayLen()),
-                    ),
-                });
-            } else {
-                return mod.fail(
-                    scope,
-                    field_name_src,
-                    "no member named '{s}' in '{}'",
-                    .{ field_name, elem_ty },
-                );
-            }
+    token: ast.TokenIndex,
+    buf: *ArrayList(u8),
+    bytes: []const u8,
+    offset: u32,
+) InnerError!void {
+    const tree = scope.tree();
+    const token_starts = tree.tokens.items(.start);
+    const raw_string = bytes[offset..];
+    switch (try std.zig.string_literal.parseAppend(buf, raw_string)) {
+        .success => return,
+        .invalid_character => |bad_index| {
+            // `fail` now takes a `LazySrcLoc`; use `failOff` since we have an
+            // absolute byte offset into the file.
+            return mod.failOff(
+                scope,
+                token_starts[token] + offset + @intCast(u32, bad_index),
+                "invalid string literal character: '{c}'",
+                .{raw_string[bad_index]},
+            );
         },
-        .Pointer => {
-            const ptr_child = elem_ty.elemType();
-            switch (ptr_child.zigTypeTag()) {
-                .Array => {
-                    if (mem.eql(u8, field_name, "len")) {
-                        return mod.constInst(scope, src, .{
-                            .ty = Type.initTag(.single_const_pointer_to_comptime_int),
-                            .val = try Value.Tag.ref_val.create(
-                                scope.arena(),
-                                try Value.Tag.int_u64.create(scope.arena(), ptr_child.arrayLen()),
-                            ),
-                        });
-                    } else {
-                        return mod.fail(
-                            scope,
-                            field_name_src,
-                            "no member named '{s}' in '{}'",
-                            .{ field_name, elem_ty },
-                        );
-                    }
-                },
-                else => {},
-            }
+        .expected_hex_digits => |bad_index| {
+            return mod.fail(
+                scope,
+                token_starts[token] + offset + bad_index,
+                "expected hex digits after '\\x'",
+                .{},
+            );
         },
-        .Type => {
-            _ = try mod.resolveConstValue(scope, object_ptr);
-            const result = try mod.analyzeDeref(scope, src, object_ptr, object_ptr.src);
-            const val = result.value().?;
-            const child_type = try val.toType(scope.arena());
-            switch (child_type.zigTypeTag()) {
-                .ErrorSet => {
-                    var name: []const u8 = undefined;
-                    // TODO resolve inferred error sets
-                    if (val.castTag(.error_set)) |payload|
-                        name = (payload.data.fields.getEntry(field_name) orelse return mod.fail(scope, src, "no error named '{s}' in '{}'", .{ field_name, child_type })).key
-                    else
-                        name = (try mod.getErrorValue(field_name)).key;
-
-                    const result_type = if (child_type.tag() == .anyerror)
-                        try Type.Tag.error_set_single.create(scope.arena(), name)
-                    else
-                        child_type;
-
-                    return mod.constInst(scope, src, .{
-                        .ty = try mod.simplePtrType(scope, src, result_type, false, .One),
-                        .val = try Value.Tag.ref_val.create(
-                            scope.arena(),
-                            try Value.Tag.@"error".create(scope.arena(), .{
-                                .name = name,
-                            }),
-                        ),
-                    });
-                },
-                .Struct => {
-                    const container_scope = child_type.getContainerScope();
-                    if (mod.lookupDeclName(&container_scope.base, field_name)) |decl| {
-                        // TODO if !decl.is_pub and inDifferentFiles() "{} is private"
-                        return mod.analyzeDeclRef(scope, src, decl);
-                    }
-
-                    if (container_scope.file_scope == mod.root_scope) {
-                        return mod.fail(scope, src, "root source file has no member called '{s}'", .{field_name});
-                    } else {
-                        return mod.fail(scope, src, "container '{}' has no member called '{s}'", .{ child_type, field_name });
-                    }
-                },
-                else => return mod.fail(scope, src, "type '{}' does not support field access", .{child_type}),
-            }
+        .invalid_hex_escape => |bad_index| {
+            return mod.fail(
+                scope,
+                token_starts[token] + offset + bad_index,
+                "invalid hex digit: '{c}'",
+                .{raw_string[bad_index]},
+            );
+        },
+        .invalid_unicode_escape => |bad_index| {
+            return mod.fail(
+                scope,
+                token_starts[token] + offset + bad_index,
+                "invalid unicode digit: '{c}'",
+                .{raw_string[bad_index]},
+            );
+        },
+        .missing_matching_brace => |bad_index| {
+            return mod.fail(
+                scope,
+                token_starts[token] + offset + bad_index,
+                "missing matching '}}' character",
+                .{},
+            );
+        },
+        .expected_unicode_digits => |bad_index| {
+            return mod.fail(
+                scope,
+                token_starts[token] + offset + bad_index,
+                "expected unicode digits after '\\u'",
+                .{},
+            );
         },
-        else => {},
-    }
-    return mod.fail(scope, src, "type '{}' does not support field access", .{elem_ty});
-}
-
-pub fn elemPtr(
-    mod: *Module,
-    scope: *Scope,
-    src: usize,
-    array_ptr: *Inst,
-    elem_index: *Inst,
-) InnerError!*Inst {
-    const elem_ty = switch (array_ptr.ty.zigTypeTag()) {
-        .Pointer => array_ptr.ty.elemType(),
-        else => return mod.fail(scope, array_ptr.src, "expected pointer, found '{}'", .{array_ptr.ty}),
-    };
-    if (!elem_ty.isIndexable()) {
-        return mod.fail(scope, src, "array access of non-array type '{}'", .{elem_ty});
-    }
-
-    if (elem_ty.isSinglePointer() and elem_ty.elemType().zigTypeTag() == .Array) {
-        // we have to deref the ptr operand to get the actual array pointer
-        const array_ptr_deref = try mod.analyzeDeref(scope, src, array_ptr, array_ptr.src);
-        if (array_ptr_deref.value()) |array_ptr_val| {
-            if (elem_index.value()) |index_val| {
-                // Both array pointer and index are compile-time known.
-                const index_u64 = index_val.toUnsignedInt();
-                // @intCast here because it would have been impossible to construct a value that
-                // required a larger index.
-                const elem_ptr = try array_ptr_val.elemPtr(scope.arena(), @intCast(usize, index_u64));
-                const pointee_type = elem_ty.elemType().elemType();
-
-                return mod.constInst(scope, src, .{
-                    .ty = try Type.Tag.single_const_pointer.create(scope.arena(), pointee_type),
-                    .val = elem_ptr,
-                });
-            }
-        }
     }
-
-    return mod.fail(scope, src, "TODO implement more analyze elemptr", .{});
 }
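Each error case above computes an absolute source byte offset by summing the token's start, the fragment's offset within the token, and the bad character's index within that fragment. A minimal sketch of that arithmetic (Python, with hypothetical stand-in names; the real code indexes the tokenizer's `token_starts` array):

```python
# Sketch: turning a parse error's fragment-relative index into an
# absolute byte offset, as the mod.fail calls above do.

def absolute_error_offset(token_starts, token, offset, bad_index):
    """token_starts[token] is the token's first byte in the source;
    offset is where the raw string fragment begins inside the token;
    bad_index is relative to that fragment."""
    return token_starts[token] + offset + bad_index

# A token starting at byte 100, fragment beginning 1 byte in (past the
# opening quote), bad character 5 bytes into the fragment -> byte 106.
assert absolute_error_offset([100], 0, 1, 5) == 106
```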
src/type.zig
@@ -863,7 +863,10 @@ pub const Type = extern union {
     }
 
     pub fn isNoReturn(self: Type) bool {
-        return self.zigTypeTag() == .NoReturn;
+        const definitely_correct_result = self.zigTypeTag() == .NoReturn;
+        const fast_result = self.tag_if_small_enough == Tag.noreturn;
+        assert(fast_result == definitely_correct_result);
+        return fast_result;
     }
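The new `isNoReturn` illustrates a pattern this branch leans on: answer with a cheap single-tag comparison, and assert in debug builds that it agrees with the slower, definitely-correct computation. A generic sketch of the pattern (Python, with made-up tags standing in for `Type.Tag`):

```python
# Sketch of the fast-path-with-assertion pattern used by isNoReturn:
# the cheap check must agree with the exhaustive one, and the assertion
# catches any divergence during development.
from enum import Enum, auto

class Tag(Enum):
    noreturn = auto()
    bool = auto()
    u8 = auto()

def zig_type_tag(tag):
    # Stand-in for the full zigTypeTag() computation.
    return "NoReturn" if tag is Tag.noreturn else "Other"

def is_noreturn(tag):
    definitely_correct = zig_type_tag(tag) == "NoReturn"
    fast = tag is Tag.noreturn           # single tag comparison
    assert fast == definitely_correct    # debug-build sanity check
    return fast

assert is_noreturn(Tag.noreturn) is True
assert is_noreturn(Tag.bool) is False
```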
 
     /// Asserts that hasCodeGenBits() is true.
@@ -3464,18 +3467,20 @@ pub const Type = extern union {
                 .int_unsigned,
                 => Payload.Bits,
 
+                .error_set,
+                .@"enum",
+                .@"struct",
+                .@"union",
+                => Payload.Decl,
+
                 .array => Payload.Array,
                 .array_sentinel => Payload.ArraySentinel,
                 .pointer => Payload.Pointer,
                 .function => Payload.Function,
                 .error_union => Payload.ErrorUnion,
-                .error_set => Payload.Decl,
                 .error_set_single => Payload.Name,
-                .empty_struct => Payload.ContainerScope,
-                .@"enum" => Payload.Enum,
-                .@"struct" => Payload.Struct,
-                .@"union" => Payload.Union,
                 .@"opaque" => Payload.Opaque,
+                .empty_struct => Payload.ContainerScope,
             };
         }
 
@@ -3598,13 +3603,8 @@ pub const Type = extern union {
 
         pub const Opaque = struct {
             base: Payload = .{ .tag = .@"opaque" },
-
-            scope: Module.Scope.Container,
+            data: Module.Scope.Container,
         };
-
-        pub const Enum = @import("type/Enum.zig");
-        pub const Struct = @import("type/Struct.zig");
-        pub const Union = @import("type/Union.zig");
     };
 };
 
src/value.zig
@@ -69,11 +69,12 @@ pub const Value = extern union {
         one,
         void_value,
         unreachable_value,
-        empty_struct_value,
-        empty_array,
         null_value,
         bool_true,
-        bool_false, // See last_no_payload_tag below.
+        bool_false,
+
+        empty_struct_value,
+        empty_array, // See last_no_payload_tag below.
         // After this, the tag requires a payload.
 
         ty,
@@ -107,7 +108,7 @@ pub const Value = extern union {
         /// to an inferred allocation. It does not support any of the normal value queries.
         inferred_alloc,
 
-        pub const last_no_payload_tag = Tag.bool_false;
+        pub const last_no_payload_tag = Tag.empty_array;
         pub const no_payload_count = @enumToInt(last_no_payload_tag) + 1;
 
         pub fn Type(comptime t: Tag) type {
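The tag reorder above keeps every payload-free `Value.Tag` contiguous at the front of the enum, so `last_no_payload_tag` and `no_payload_count` stay a single boundary; moving `empty_struct_value` and `empty_array` to the end of that run just shifts the boundary. A small model of the invariant (Python, with a truncated illustrative tag list):

```python
# Sketch: payload-free tags sit contiguously at the front of the enum,
# so "does this tag carry a payload?" is one integer comparison.
from enum import IntEnum

class Tag(IntEnum):
    # no-payload tags first ...
    zero = 0
    one = 1
    bool_true = 2
    bool_false = 3
    empty_struct_value = 4
    empty_array = 5          # last_no_payload_tag
    # ... payload-carrying tags after the boundary
    ty = 6
    int_u64 = 7

LAST_NO_PAYLOAD_TAG = Tag.empty_array
NO_PAYLOAD_COUNT = int(LAST_NO_PAYLOAD_TAG) + 1

def has_payload(tag):
    return int(tag) >= NO_PAYLOAD_COUNT

assert not has_payload(Tag.empty_array)
assert has_payload(Tag.int_u64)
```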
src/zir.zig
@@ -10,17 +10,338 @@ const Type = @import("type.zig").Type;
 const Value = @import("value.zig").Value;
 const TypedValue = @import("TypedValue.zig");
 const ir = @import("ir.zig");
-const IrModule = @import("Module.zig");
+const Module = @import("Module.zig");
+const ast = std.zig.ast;
+
+/// The minimum amount of information needed to represent a list of ZIR instructions.
+/// Once this structure is completed, it can be used to generate TZIR, followed by
+/// machine code, without any memory access into the AST tree token list, node list,
+/// or source bytes. Exceptions include:
+///  * Compile errors, which may need to reach into these data structures to
+///    create a useful report.
+///  * In the future, possibly inline assembly, which needs to get parsed and
+///    handled by the codegen backend, and errors reported there. However for now,
+///    inline assembly is not an exception.
+pub const Code = struct {
+    instructions: std.MultiArrayList(Inst).Slice,
+    /// In order to store references to strings in fewer bytes, we copy all
+    /// string bytes into here. String bytes can be null. It is up to whoever
+    /// is referencing the data here whether they want to store both index and length,
+    /// thus allowing null bytes, or store only index, and use null-termination. The
+    /// `string_bytes` array is agnostic to either usage.
+    string_bytes: []u8,
+    /// The meaning of this data is determined by `Inst.Tag` value.
+    extra: []u32,
+    /// First ZIR instruction in this `Code`.
+    root_start: Inst.Index,
+    /// Number of ZIR instructions in the implicit root block of the `Code`.
+    root_len: u32,
+
+    /// Returns the requested data, as well as the new index which is at the start of the
+    /// trailers for the object.
+    pub fn extraData(code: Code, comptime T: type, index: usize) struct { data: T, end: usize } {
+        const fields = std.meta.fields(T);
+        var i: usize = index;
+        var result: T = undefined;
+        inline for (fields) |field| {
+            comptime assert(field.field_type == u32);
+            @field(result, field.name) = code.extra[i];
+            i += 1;
+        }
+        return .{
+            .data = result,
+            .end = i,
+        };
+    }
+
+    /// Given an index into `string_bytes` returns the null-terminated string found there.
+    pub fn nullTerminatedString(code: Code, index: usize) [:0]const u8 {
+        var end: usize = index;
+        while (code.string_bytes[end] != 0) {
+            end += 1;
+        }
+        return code.string_bytes[index..end :0];
+    }
+};
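The `extraData` and `nullTerminatedString` helpers above can be illustrated with a small model: one 32-bit `extra` slot per payload field, decoded in field order, followed by any trailing items; and a shared byte buffer holding null-terminated strings. (Python sketch with hypothetical field names; the real code uses comptime reflection over the payload struct's fields.)

```python
# Sketch of zir.Code's side arrays: 32-bit "extra" trailers decoded by
# field order, and a shared byte buffer holding null-terminated strings.
from dataclasses import dataclass

@dataclass
class Code:
    extra: list          # 32-bit ints; meaning determined by the Inst.Tag
    string_bytes: bytes  # all string data, shared by the whole Code

    def extra_data(self, field_names, index):
        """Decode one u32 per field, returning the record and the index
        just past it (where any trailing items begin)."""
        record = {name: self.extra[index + i]
                  for i, name in enumerate(field_names)}
        return record, index + len(field_names)

    def null_terminated_string(self, index):
        end = self.string_bytes.index(0, index)
        return self.string_bytes[index:end]

code = Code(extra=[7, 2, 100, 101], string_bytes=b"foo\x00bar\x00")
record, end = code.extra_data(["operand", "args_len"], 0)
assert record == {"operand": 7, "args_len": 2}
assert code.extra[end:end + record["args_len"]] == [100, 101]
assert code.null_terminated_string(4) == b"bar"
```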
 
-/// These are instructions that correspond to the ZIR text format. See `ir.Inst` for
-/// in-memory, analyzed instructions with types and values.
-/// We use a table to map these instruction to their respective semantically analyzed
-/// instructions because it is possible to have multiple analyses on the same ZIR
-/// happening at the same time.
+/// These correspond to the first N tags of Value.
+/// A ZIR instruction refers to another one by index. However the first N indexes
+/// correspond to this enum, and the next M indexes correspond to the parameters
+/// of the current function. After that, they refer to other instructions in the
+/// instructions array for the function.
+/// When adding to this, consider adding a corresponding entry to `simple_types`
+/// in astgen.
+pub const Const = enum {
+    /// The 0 value is reserved so that ZIR instruction indexes can use it to
+    /// mean "null".
+    unused,
+
+    u8_type,
+    i8_type,
+    u16_type,
+    i16_type,
+    u32_type,
+    i32_type,
+    u64_type,
+    i64_type,
+    usize_type,
+    isize_type,
+    c_short_type,
+    c_ushort_type,
+    c_int_type,
+    c_uint_type,
+    c_long_type,
+    c_ulong_type,
+    c_longlong_type,
+    c_ulonglong_type,
+    c_longdouble_type,
+    f16_type,
+    f32_type,
+    f64_type,
+    f128_type,
+    c_void_type,
+    bool_type,
+    void_type,
+    type_type,
+    anyerror_type,
+    comptime_int_type,
+    comptime_float_type,
+    noreturn_type,
+    null_type,
+    undefined_type,
+    fn_noreturn_no_args_type,
+    fn_void_no_args_type,
+    fn_naked_noreturn_no_args_type,
+    fn_ccc_void_no_args_type,
+    single_const_pointer_to_comptime_int_type,
+    const_slice_u8_type,
+    enum_literal_type,
+    anyframe_type,
+
+    /// `undefined` (untyped)
+    undef,
+    /// `0` (comptime_int)
+    zero,
+    /// `1` (comptime_int)
+    one,
+    /// `{}`
+    void_value,
+    /// `unreachable` (noreturn type)
+    unreachable_value,
+    /// `null` (untyped)
+    null_value,
+    /// `true`
+    bool_true,
+    /// `false`
+    bool_false,
+};
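The index scheme described in the doc comment above can be sketched as three contiguous ranges: index 0 is reserved as "null", the next block maps to `Const` entries, then the function's parameters, then ordinary instructions. (Python model; the counts and helper name are illustrative, not the compiler's API.)

```python
# Sketch of the ZIR reference-index scheme: reserved null, then Const
# entries, then function parameters, then instruction indexes.
NUM_CONSTS = 50   # stand-in for the number of Const enum entries

def resolve_ref(index, num_params):
    if index == 0:
        return ("null", None)
    if index < NUM_CONSTS:
        return ("const", index)               # one of the Const values
    if index < NUM_CONSTS + num_params:
        return ("param", index - NUM_CONSTS)  # parameter of this function
    return ("inst", index - NUM_CONSTS - num_params)

assert resolve_ref(0, 2) == ("null", None)
assert resolve_ref(10, 2) == ("const", 10)
assert resolve_ref(51, 2) == ("param", 1)
assert resolve_ref(52, 2) == ("inst", 0)
```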
+
+pub const const_inst_list = enumArray(Const, .{
+    .u8_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.u8_type),
+    }),
+    .i8_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.i8_type),
+    }),
+    .u16_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.u16_type),
+    }),
+    .i16_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.i16_type),
+    }),
+    .u32_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.u32_type),
+    }),
+    .i32_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.i32_type),
+    }),
+    .u64_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.u64_type),
+    }),
+    .i64_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.i64_type),
+    }),
+    .usize_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.usize_type),
+    }),
+    .isize_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.isize_type),
+    }),
+    .c_short_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.c_short_type),
+    }),
+    .c_ushort_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.c_ushort_type),
+    }),
+    .c_int_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.c_int_type),
+    }),
+    .c_uint_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.c_uint_type),
+    }),
+    .c_long_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.c_long_type),
+    }),
+    .c_ulong_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.c_ulong_type),
+    }),
+    .c_longlong_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.c_longlong_type),
+    }),
+    .c_ulonglong_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.c_ulonglong_type),
+    }),
+    .c_longdouble_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.c_longdouble_type),
+    }),
+    .f16_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.f16_type),
+    }),
+    .f32_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.f32_type),
+    }),
+    .f64_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.f64_type),
+    }),
+    .f128_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.f128_type),
+    }),
+    .c_void_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.c_void_type),
+    }),
+    .bool_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.bool_type),
+    }),
+    .void_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.void_type),
+    }),
+    .type_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.type_type),
+    }),
+    .anyerror_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.anyerror_type),
+    }),
+    .comptime_int_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.comptime_int_type),
+    }),
+    .comptime_float_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.comptime_float_type),
+    }),
+    .noreturn_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.noreturn_type),
+    }),
+    .null_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.null_type),
+    }),
+    .undefined_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.undefined_type),
+    }),
+    .fn_noreturn_no_args_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.fn_noreturn_no_args_type),
+    }),
+    .fn_void_no_args_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.fn_void_no_args_type),
+    }),
+    .fn_naked_noreturn_no_args_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.fn_naked_noreturn_no_args_type),
+    }),
+    .fn_ccc_void_no_args_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.fn_ccc_void_no_args_type),
+    }),
+    .single_const_pointer_to_comptime_int_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.single_const_pointer_to_comptime_int_type),
+    }),
+    .const_slice_u8_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.const_slice_u8_type),
+    }),
+    .enum_literal_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.enum_literal_type),
+    }),
+    .anyframe_type = @as(TypedValue, .{
+        .ty = Type.initTag(.type),
+        .val = Value.initTag(.anyframe_type),
+    }),
+
+    .undef = @as(TypedValue, .{
+        .ty = Type.initTag(.@"undefined"),
+        .val = Value.initTag(.undef),
+    }),
+    .zero = @as(TypedValue, .{
+        .ty = Type.initTag(.comptime_int),
+        .val = Value.initTag(.zero),
+    }),
+    .one = @as(TypedValue, .{
+        .ty = Type.initTag(.comptime_int),
+        .val = Value.initTag(.one),
+    }),
+    .void_value = @as(TypedValue, .{
+        .ty = Type.initTag(.void),
+        .val = Value.initTag(.void_value),
+    }),
+    .unreachable_value = @as(TypedValue, .{
+        .ty = Type.initTag(.noreturn),
+        .val = Value.initTag(.unreachable_value),
+    }),
+    .null_value = @as(TypedValue, .{
+        .ty = Type.initTag(.@"null"),
+        .val = Value.initTag(.null_value),
+    }),
+    .bool_true = @as(TypedValue, .{
+        .ty = Type.initTag(.bool),
+        .val = Value.initTag(.bool_true),
+    }),
+    .bool_false = @as(TypedValue, .{
+        .ty = Type.initTag(.bool),
+        .val = Value.initTag(.bool_false),
+    }),
+});
+
+/// These are untyped instructions generated from an Abstract Syntax Tree.
+/// The data here is immutable because it is possible to have multiple
+/// analyses on the same ZIR happening at the same time.
 pub const Inst = struct {
     tag: Tag,
-    /// Byte offset into the source.
-    src: usize,
+    data: Data,
 
     /// These names are used directly as the instruction names in the text format.
     pub const Tag = enum {
@@ -28,40 +349,45 @@ pub const Inst = struct {
         add,
         /// Twos complement wrapping integer addition.
         addwrap,
-        /// Allocates stack local memory. Its lifetime ends when the block ends that contains
-        /// this instruction. The operand is the type of the allocated object.
+        /// Allocates stack local memory.
+        /// Uses the `un_node` union field. The operand is the type of the allocated object.
+        /// The node source location points to a var decl node.
+        /// Indicates the beginning of a new statement in debug info.
         alloc,
         /// Same as `alloc` except mutable.
         alloc_mut,
         /// Same as `alloc` except the type is inferred.
+        /// lhs and rhs unused.
         alloc_inferred,
         /// Same as `alloc_inferred` except mutable.
+        /// lhs and rhs unused.
         alloc_inferred_mut,
         /// Create an `anyframe->T`.
+        /// Uses the `un_node` field. AST node is the `anyframe->T` syntax. Operand is the type.
         anyframe_type,
         /// Array concatenation. `a ++ b`
         array_cat,
         /// Array multiplication `a ** b`
         array_mul,
-        /// Create an array type
+        /// lhs is length, rhs is element type.
         array_type,
-        /// Create an array type with sentinel
+        /// lhs is length, ArrayTypeSentinel[rhs]
         array_type_sentinel,
         /// Given a pointer to an indexable object, returns the len property. This is
-        /// used by for loops. This instruction also emits a for-loop specific instruction
-        /// if the indexable object is not indexable.
+        /// used by for loops. This instruction also emits a for-loop specific compile
+        /// error if the indexable object is not indexable.
+        /// Uses the `un_node` field. The AST node is the for loop node.
         indexable_ptr_len,
-        /// Function parameter value. These must be first in a function's main block,
-        /// in respective order with the parameters.
-        /// TODO make this instruction implicit; after we transition to having ZIR
-        /// instructions be same sized and referenced by index, the first N indexes
-        /// will implicitly be references to the parameters of the function.
-        arg,
         /// Type coercion.
+        /// Uses the `bin` field.
         as,
-        /// Inline assembly.
+        /// Inline assembly. Non-volatile.
+        /// Uses the `pl_node` union field. Payload is `Asm`. AST node is the assembly node.
         @"asm",
-        /// Await an async function.
+        /// Inline assembly with the volatile attribute.
+        /// Uses the `pl_node` union field. Payload is `Asm`. AST node is the assembly node.
+        asm_volatile,
+        /// `await x` syntax. Uses the `un_node` union field.
         @"await",
         /// Bitwise AND. `&`
         bit_and,
@@ -80,6 +406,7 @@ pub const Inst = struct {
         /// Bitwise OR. `|`
         bit_or,
         /// A labeled block of code, which can return a value.
+        /// Uses the `pl_node` union field.
         block,
         /// A block of code, which can return a value. There are no instructions that break out of
         /// this block; it is implied that the final instruction is the result.
@@ -89,18 +416,36 @@ pub const Inst = struct {
         /// Same as `block_flat` but additionally makes the inner instructions execute at comptime.
         block_comptime_flat,
         /// Boolean AND. See also `bit_and`.
+        /// Uses the `bin` field.
         bool_and,
         /// Boolean NOT. See also `bit_not`.
+        /// Uses the `un_tok` field.
         bool_not,
         /// Boolean OR. See also `bit_or`.
+        /// Uses the `bin` field.
         bool_or,
-        /// Return a value from a `Block`.
+        /// Return a value from a block.
+        /// Uses the `bin` union field: `lhs` is `Ref` to the block, `rhs` is operand.
+        /// Uses the source information from previous instruction.
         @"break",
+        /// Same as `break` but has source information in the form of a token, and
+        /// the operand is assumed to be the void value.
+        /// Uses the `un_tok` union field.
+        break_void_tok,
+        /// lhs and rhs unused.
         breakpoint,
-        /// Same as `break` but without an operand; the operand is assumed to be the void value.
-        break_void,
-        /// Function call.
+        /// Function call with modifier `.auto`.
+        /// Uses `pl_node`. AST node is the function call. Payload is `Call`.
         call,
+        /// Same as `call` but with modifier `.async_kw`.
+        call_async_kw,
+        /// Same as `call` but with modifier `.no_async`.
+        call_no_async,
+        /// Same as `call` but with modifier `.compile_time`.
+        call_compile_time,
+        /// Function call with modifier `.auto`, empty parameter list.
+        /// Uses the `un_node` field. Operand is callee. AST node is the function call.
+        call_none,
         /// `<`
         cmp_lt,
         /// `<=`
@@ -118,95 +463,117 @@ pub const Inst = struct {
         /// LHS is destination element type, RHS is result pointer.
         coerce_result_ptr,
         /// Emit an error message and fail compilation.
+        /// Uses the `un_node` field.
         compile_error,
         /// Log compile time variables and emit an error message.
+        /// Uses the `pl_node` union field. The AST node is the compile log builtin call.
+        /// The payload is `MultiOp`.
         compile_log,
         /// Conditional branch. Splits control flow based on a boolean condition value.
         condbr,
         /// Special case, has no textual representation.
         @"const",
-        /// Container field with just the name.
-        container_field_named,
-        /// Container field with a type and a name,
-        container_field_typed,
-        /// Container field with all the bells and whistles.
-        container_field,
         /// Declares the beginning of a statement. Used for debug info.
-        dbg_stmt,
+        /// Uses the `node` union field.
+        dbg_stmt_node,
         /// Represents a pointer to a global decl.
+        /// Uses the `decl` union field.
         decl_ref,
-        /// Represents a pointer to a global decl by string name.
-        decl_ref_str,
         /// Equivalent to a decl_ref followed by deref.
+        /// Uses the `decl` union field.
         decl_val,
-        /// Load the value from a pointer.
-        deref,
+        /// Load the value from a pointer. Assumes `x.*` syntax.
+        /// Uses `un_node` field. AST node is the `x.*` syntax.
+        deref_node,
         /// Arithmetic division. Asserts no integer overflow.
         div,
         /// Given a pointer to an array, slice, or pointer, returns a pointer to the element at
-        /// the provided index.
+        /// the provided index. Uses the `bin` union field. Source location is implied
+        /// to be the same as the previous instruction.
         elem_ptr,
+        /// Same as `elem_ptr` except also stores a source location node.
+        /// Uses the `pl_node` union field. AST node is a[b] syntax. Payload is `Bin`.
+        elem_ptr_node,
         /// Given an array, slice, or pointer, returns the element at the provided index.
+        /// Uses the `bin` union field. Source location is implied to be the same
+        /// as the previous instruction.
         elem_val,
+        /// Same as `elem_val` except also stores a source location node.
+        /// Uses the `pl_node` union field. AST node is a[b] syntax. Payload is `Bin`.
+        elem_val_node,
         /// Emits a compile error if the operand is not `void`.
+        /// Uses the `un_node` field.
         ensure_result_used,
         /// Emits a compile error if an error is ignored.
+        /// Uses the `un_node` field.
         ensure_result_non_error,
         /// Create a `E!T` type.
         error_union_type,
-        /// Create an error set.
+        /// Create an error set. extra[lhs..rhs]. The values are token index offsets.
         error_set,
-        /// `error.Foo` syntax.
+        /// `error.Foo` syntax. Uses the `tok` field of the Data union.
         error_value,
-        /// Export the provided Decl as the provided name in the compilation's output object file.
-        @"export",
         /// Given a pointer to a struct or object that contains virtual fields, returns a pointer
-        /// to the named field. The field name is a []const u8. Used by a.b syntax.
+        /// to the named field. The field name is stored in string_bytes. Used by a.b syntax.
+        /// Uses `pl_node` field. The AST node is the a.b syntax. Payload is Field.
         field_ptr,
         /// Given a struct or object that contains virtual fields, returns the named field.
-        /// The field name is a []const u8. Used by a.b syntax.
+        /// The field name is stored in string_bytes. Used by a.b syntax.
+        /// Uses `pl_node` field. The AST node is the a.b syntax. Payload is Field.
         field_val,
         /// Given a pointer to a struct or object that contains virtual fields, returns a pointer
         /// to the named field. The field name is a comptime instruction. Used by @field.
+        /// Uses `pl_node` field. The AST node is the builtin call. Payload is FieldNamed.
         field_ptr_named,
         /// Given a struct or object that contains virtual fields, returns the named field.
         /// The field name is a comptime instruction. Used by @field.
+        /// Uses `pl_node` field. The AST node is the builtin call. Payload is FieldNamed.
         field_val_named,
-        /// Convert a larger float type to any other float type, possibly causing a loss of precision.
+        /// Convert a larger float type to any other float type, possibly causing
+        /// a loss of precision.
         floatcast,
-        /// Declare a function body.
-        @"fn",
         /// Returns a function type, assuming unspecified calling convention.
+        /// Uses the `fn_type` union field. `payload_index` points to a `FnType`.
         fn_type,
         /// Same as `fn_type` but the function is variadic.
         fn_type_var_args,
         /// Returns a function type, with a calling convention instruction operand.
+        /// Uses the `fn_type` union field. `payload_index` points to a `FnTypeCc`.
         fn_type_cc,
         /// Same as `fn_type_cc` but the function is variadic.
         fn_type_cc_var_args,
-        /// @import(operand)
+        /// `@import(operand)`.
+        /// Uses the `un_node` field.
         import,
-        /// Integer literal.
+        /// Integer literal that fits in a u64. Uses the `int` union field.
         int,
         /// Convert an integer value to another integer type, asserting that the destination type
         /// can hold the same mathematical value.
         intcast,
         /// Make an integer type out of signedness and bit count.
+        /// lhs is signedness, rhs is bit count.
         int_type,
         /// Return a boolean false if an optional is null. `x != null`
+        /// Uses the `un_tok` field.
         is_non_null,
         /// Return a boolean true if an optional is null. `x == null`
+        /// Uses the `un_tok` field.
         is_null,
         /// Return a boolean false if an optional is null. `x.* != null`
+        /// Uses the `un_tok` field.
         is_non_null_ptr,
         /// Return a boolean true if an optional is null. `x.* == null`
+        /// Uses the `un_tok` field.
         is_null_ptr,
         /// Return a boolean true if value is an error
+        /// Uses the `un_tok` field.
         is_err,
         /// Return a boolean true if dereferenced pointer is an error
+        /// Uses the `un_tok` field.
         is_err_ptr,
         /// A labeled block of code that loops forever. At the end of the body it is implied
         /// to repeat; no explicit "repeat" instruction terminates loop bodies.
+        /// SubRange[lhs..rhs]
         loop,
         /// Merge two error sets into one, `E1 || E2`.
         merge_error_sets,
@@ -221,63 +588,70 @@ pub const Inst = struct {
         /// An await inside a nosuspend scope.
         nosuspend_await,
         /// Given a reference to a function and a parameter index, returns the
-        /// type of the parameter. TODO what happens when the parameter is `anytype`?
+        /// type of the parameter. The only usage of this instruction is for the
+        /// result location of parameters of function calls. In the case of a function's
+        /// parameter type being `anytype`, it is the type coercion's job to detect this
+        /// scenario and skip the coercion, so that semantic analysis of this instruction
+        /// is not in a position where it must create an invalid type.
+        /// Uses the `param_type` union field.
         param_type,
-        /// An alternative to using `const` for simple primitive values such as `true` or `u8`.
-        /// TODO flatten so that each primitive has its own ZIR Inst Tag.
-        primitive,
         /// Convert a pointer to a `usize` integer.
+        /// Uses the `un_node` field. The AST node is the builtin fn call node.
         ptrtoint,
         /// Turns an R-Value into a const L-Value. In other words, it takes a value,
         /// stores it in a memory location, and returns a const pointer to it. If the value
         /// is `comptime`, the memory location is global static constant data. Otherwise,
         /// the memory location is in the stack frame, local to the scope containing the
         /// instruction.
+        /// Uses the `un_tok` union field.
         ref,
         /// Resume an async function.
         @"resume",
         /// Obtains a pointer to the return value.
+        /// lhs and rhs unused.
         ret_ptr,
         /// Obtains the return type of the in-scope function.
+        /// lhs and rhs unused.
         ret_type,
-        /// Sends control flow back to the function's callee. Takes an operand as the return value.
-        @"return",
-        /// Same as `return` but there is no operand; the operand is implicitly the void value.
-        return_void,
+        /// Sends control flow back to the function's callee.
+        /// Includes an operand as the return value.
+        /// Includes an AST node source location.
+        /// Uses the `un_node` union field.
+        ret_node,
+        /// Sends control flow back to the function's callee.
+        /// Includes an operand as the return value.
+        /// Includes a token source location.
+        /// Uses the `un_tok` union field.
+        ret_tok,
         /// Changes the maximum number of backwards branches that compile-time
         /// code execution can use before giving up and making a compile error.
+        /// Uses the `un_node` union field.
         set_eval_branch_quota,
         /// Integer shift-left. Zeroes are shifted in from the right hand side.
         shl,
         /// Integer shift-right. Arithmetic or logical depending on the signedness of the integer type.
         shr,
-        /// Create a const pointer type with element type T. `*const T`
-        single_const_ptr_type,
-        /// Create a mutable pointer type with element type T. `*T`
-        single_mut_ptr_type,
-        /// Create a const pointer type with element type T. `[*]const T`
-        many_const_ptr_type,
-        /// Create a mutable pointer type with element type T. `[*]T`
-        many_mut_ptr_type,
-        /// Create a const pointer type with element type T. `[*c]const T`
-        c_const_ptr_type,
-        /// Create a mutable pointer type with element type T. `[*c]T`
-        c_mut_ptr_type,
-        /// Create a mutable slice type with element type T. `[]T`
-        mut_slice_type,
-        /// Create a const slice type with element type T. `[]T`
-        const_slice_type,
-        /// Create a pointer type with attributes
+        /// Create a pointer type that does not have a sentinel, alignment, or bit range specified.
+        /// Uses the `ptr_type_simple` union field.
+        ptr_type_simple,
+        /// Create a pointer type which can have a sentinel, alignment, and/or bit range.
+        /// Uses the `ptr_type` union field.
         ptr_type,
         /// Each `store_to_inferred_ptr` puts the type of the stored value into a set,
         /// and then `resolve_inferred_alloc` triggers peer type resolution on the set.
         /// The operand is a `alloc_inferred` or `alloc_inferred_mut` instruction, which
         /// is the allocation that needs to have its type inferred.
+        /// Uses the `un_node` field. The AST node is the var decl.
         resolve_inferred_alloc,
-        /// Slice operation `array_ptr[start..end:sentinel]`
-        slice,
-        /// Slice operation with just start `lhs[rhs..]`
+        /// Slice operation `lhs[rhs..]`. No sentinel and no end offset.
+        /// Uses the `pl_node` field. AST node is the slice syntax. Payload is `SliceStart`.
         slice_start,
+        /// Slice operation `array_ptr[start..end]`. No sentinel.
+        /// Uses the `pl_node` field. AST node is the slice syntax. Payload is `SliceEnd`.
+        slice_end,
+        /// Slice operation `array_ptr[start..end:sentinel]`.
+        /// Uses the `pl_node` field. AST node is the slice syntax. Payload is `SliceSentinel`.
+        slice_sentinel,
         /// Write a value to a pointer. For loading, see `deref`.
         store,
         /// Same as `store` but the type of the value being stored will be used to infer
@@ -287,242 +661,130 @@ pub const Inst = struct {
         /// the pointer type.
         store_to_inferred_ptr,
         /// String Literal. Makes an anonymous Decl and then takes a pointer to it.
+        /// Uses the `str` union field.
         str,
-        /// Create a struct type.
-        struct_type,
         /// Arithmetic subtraction. Asserts no integer overflow.
         sub,
         /// Twos complement wrapping integer subtraction.
         subwrap,
         /// Returns the type of a value.
+        /// Uses the `un_tok` field.
         typeof,
-        /// Is the builtin @TypeOf which returns the type after peertype resolution of one or more params
+        /// The builtin `@TypeOf` which returns the type after peer type
+        /// resolution of one or more params.
+        /// Uses the `pl_node` field. AST node is the `@TypeOf` call. Payload is `MultiOp`.
         typeof_peer,
         /// Asserts control-flow will not reach this instruction. Not safety checked - the compiler
         /// will assume the correctness of this instruction.
+        /// lhs and rhs unused.
         unreachable_unsafe,
         /// Asserts control-flow will not reach this instruction. In safety-checked modes,
         /// this will generate a call to the panic function unless it can be proven unreachable
         /// by the compiler.
+        /// lhs and rhs unused.
         unreachable_safe,
         /// Bitwise XOR. `^`
         xor,
         /// Create an optional type '?T'
+        /// Uses the `un_tok` field.
         optional_type,
         /// Create an optional type '?T'. The operand is a pointer value. The optional type will
         /// be the type of the pointer element, wrapped in an optional.
+        /// Uses the `un_tok` field.
         optional_type_from_ptr_elem,
-        /// Create a union type.
-        union_type,
         /// ?T => T with safety.
         /// Given an optional value, returns the payload value, with a safety check that
         /// the value is non-null. Used for `orelse`, `if` and `while`.
+        /// Uses the `un_tok` field.
         optional_payload_safe,
         /// ?T => T without safety.
         /// Given an optional value, returns the payload value. No safety checks.
+        /// Uses the `un_tok` field.
         optional_payload_unsafe,
         /// *?T => *T with safety.
         /// Given a pointer to an optional value, returns a pointer to the payload value,
         /// with a safety check that the value is non-null. Used for `orelse`, `if` and `while`.
+        /// Uses the `un_tok` field.
         optional_payload_safe_ptr,
         /// *?T => *T without safety.
         /// Given a pointer to an optional value, returns a pointer to the payload value.
         /// No safety checks.
+        /// Uses the `un_tok` field.
         optional_payload_unsafe_ptr,
         /// E!T => T with safety.
         /// Given an error union value, returns the payload value, with a safety check
         /// that the value is not an error. Used for catch, if, and while.
+        /// Uses the `un_tok` field.
         err_union_payload_safe,
         /// E!T => T without safety.
         /// Given an error union value, returns the payload value. No safety checks.
+        /// Uses the `un_tok` field.
         err_union_payload_unsafe,
         /// *E!T => *T with safety.
         /// Given a pointer to an error union value, returns a pointer to the payload value,
         /// with a safety check that the value is not an error. Used for catch, if, and while.
+        /// Uses the `un_tok` field.
         err_union_payload_safe_ptr,
         /// *E!T => *T without safety.
         /// Given a pointer to an error union value, returns a pointer to the payload value.
         /// No safety checks.
+        /// Uses the `un_tok` field.
         err_union_payload_unsafe_ptr,
         /// E!T => E without safety.
         /// Given an error union value, returns the error code. No safety checks.
+        /// Uses the `un_tok` field.
         err_union_code,
         /// *E!T => E without safety.
         /// Given a pointer to an error union value, returns the error code. No safety checks.
+        /// Uses the `un_tok` field.
         err_union_code_ptr,
         /// Takes a *E!T and raises a compiler error if T != void
+        /// Uses the `un_tok` field.
         ensure_err_payload_void,
-        /// Create a enum literal,
+        /// An enum literal. Uses the `str` union field.
         enum_literal,
-        /// Create an enum type.
-        enum_type,
-        /// Does nothing; returns a void value.
-        void_value,
-        /// Suspend an async function.
-        @"suspend",
-        /// Suspend an async function.
-        /// Same as .suspend but with a block.
+        /// Suspend an async function. The suspend block has 0 or 1 statements in it.
+        /// Uses the `un_node` union field.
+        suspend_block_one,
+        /// Suspend an async function. The suspend block has any number of statements in it.
+        /// Uses the `block` union field.
         suspend_block,
         /// A switch expression.
-        switchbr,
-        /// Same as `switchbr` but the target is a pointer to the value being switched on.
-        switchbr_ref,
+        /// lhs is target, SwitchBr[rhs]
+        /// All prongs of the target are handled.
+        switch_br,
+        /// Same as `switch_br`, except has a range field.
+        switch_br_range,
+        /// Same as `switch_br`, except has an else prong.
+        switch_br_else,
+        /// Same as `switch_br_else`, except has a range field.
+        switch_br_else_range,
+        /// Same as `switch_br`, except has an underscore prong.
+        switch_br_underscore,
+        /// Same as `switch_br_underscore`, except has a range field.
+        switch_br_underscore_range,
+        /// Same as `switch_br` but the target is a pointer to the value being switched on.
+        switch_br_ref,
+        /// Same as `switch_br_range` but the target is a pointer to the value being switched on.
+        switch_br_ref_range,
+        /// Same as `switch_br_else` but the target is a pointer to the value being switched on.
+        switch_br_ref_else,
+        /// Same as `switch_br_else_range` but the target is a pointer to the
+        /// value being switched on.
+        switch_br_ref_else_range,
+        /// Same as `switch_br_underscore` but the target is a pointer to the value
+        /// being switched on.
+        switch_br_ref_underscore,
+        /// Same as `switch_br_underscore_range` but the target is a pointer to
+        /// the value being switched on.
+        switch_br_ref_underscore_range,
         /// A range in a switch case, `lhs...rhs`.
         /// Only checks that `lhs >= rhs` if they are ints, everything else is
-        /// validated by the .switch instruction.
+        /// validated by the `switch_br` instruction.
         switch_range,
 
-        pub fn Type(tag: Tag) type {
-            return switch (tag) {
-                .alloc_inferred,
-                .alloc_inferred_mut,
-                .breakpoint,
-                .dbg_stmt,
-                .return_void,
-                .ret_ptr,
-                .ret_type,
-                .unreachable_unsafe,
-                .unreachable_safe,
-                .void_value,
-                .@"suspend",
-                => NoOp,
-
-                .alloc,
-                .alloc_mut,
-                .bool_not,
-                .compile_error,
-                .deref,
-                .@"return",
-                .is_null,
-                .is_non_null,
-                .is_null_ptr,
-                .is_non_null_ptr,
-                .is_err,
-                .is_err_ptr,
-                .ptrtoint,
-                .ensure_result_used,
-                .ensure_result_non_error,
-                .bitcast_result_ptr,
-                .ref,
-                .bitcast_ref,
-                .typeof,
-                .resolve_inferred_alloc,
-                .single_const_ptr_type,
-                .single_mut_ptr_type,
-                .many_const_ptr_type,
-                .many_mut_ptr_type,
-                .c_const_ptr_type,
-                .c_mut_ptr_type,
-                .mut_slice_type,
-                .const_slice_type,
-                .optional_type,
-                .optional_type_from_ptr_elem,
-                .optional_payload_safe,
-                .optional_payload_unsafe,
-                .optional_payload_safe_ptr,
-                .optional_payload_unsafe_ptr,
-                .err_union_payload_safe,
-                .err_union_payload_unsafe,
-                .err_union_payload_safe_ptr,
-                .err_union_payload_unsafe_ptr,
-                .err_union_code,
-                .err_union_code_ptr,
-                .ensure_err_payload_void,
-                .anyframe_type,
-                .bit_not,
-                .import,
-                .set_eval_branch_quota,
-                .indexable_ptr_len,
-                .@"resume",
-                .@"await",
-                .nosuspend_await,
-                => UnOp,
-
-                .add,
-                .addwrap,
-                .array_cat,
-                .array_mul,
-                .array_type,
-                .bit_and,
-                .bit_or,
-                .bool_and,
-                .bool_or,
-                .div,
-                .mod_rem,
-                .mul,
-                .mulwrap,
-                .shl,
-                .shr,
-                .store,
-                .store_to_block_ptr,
-                .store_to_inferred_ptr,
-                .sub,
-                .subwrap,
-                .cmp_lt,
-                .cmp_lte,
-                .cmp_eq,
-                .cmp_gte,
-                .cmp_gt,
-                .cmp_neq,
-                .as,
-                .floatcast,
-                .intcast,
-                .bitcast,
-                .coerce_result_ptr,
-                .xor,
-                .error_union_type,
-                .merge_error_sets,
-                .slice_start,
-                .switch_range,
-                => BinOp,
-
-                .block,
-                .block_flat,
-                .block_comptime,
-                .block_comptime_flat,
-                .suspend_block,
-                => Block,
-
-                .switchbr, .switchbr_ref => SwitchBr,
-
-                .arg => Arg,
-                .array_type_sentinel => ArrayTypeSentinel,
-                .@"break" => Break,
-                .break_void => BreakVoid,
-                .call => Call,
-                .decl_ref => DeclRef,
-                .decl_ref_str => DeclRefStr,
-                .decl_val => DeclVal,
-                .compile_log => CompileLog,
-                .loop => Loop,
-                .@"const" => Const,
-                .str => Str,
-                .int => Int,
-                .int_type => IntType,
-                .field_ptr, .field_val => Field,
-                .field_ptr_named, .field_val_named => FieldNamed,
-                .@"asm" => Asm,
-                .@"fn" => Fn,
-                .@"export" => Export,
-                .param_type => ParamType,
-                .primitive => Primitive,
-                .fn_type, .fn_type_var_args => FnType,
-                .fn_type_cc, .fn_type_cc_var_args => FnTypeCc,
-                .elem_ptr, .elem_val => Elem,
-                .condbr => CondBr,
-                .ptr_type => PtrType,
-                .enum_literal => EnumLiteral,
-                .error_set => ErrorSet,
-                .error_value => ErrorValue,
-                .slice => Slice,
-                .typeof_peer => TypeOfPeer,
-                .container_field_named => ContainerFieldNamed,
-                .container_field_typed => ContainerFieldTyped,
-                .container_field => ContainerField,
-                .enum_type => EnumType,
-                .union_type => UnionType,
-                .struct_type => StructType,
-            };
+        comptime {
+            assert(@sizeOf(Tag) == 1);
         }
 
         /// Returns whether the instruction is one of the control flow "noreturn" types.
@@ -540,7 +802,6 @@ pub const Inst = struct {
                 .array_type,
                 .array_type_sentinel,
                 .indexable_ptr_len,
-                .arg,
                 .as,
                 .@"asm",
                 .bit_and,
@@ -557,6 +818,13 @@ pub const Inst = struct {
                 .bool_or,
                 .breakpoint,
                 .call,
+                .call_async_kw,
+                .call_never_tail,
+                .call_never_inline,
+                .call_no_async,
+                .call_always_tail,
+                .call_always_inline,
+                .call_compile_time,
                 .cmp_lt,
                 .cmp_lte,
                 .cmp_eq,
@@ -567,21 +835,18 @@ pub const Inst = struct {
                 .@"const",
                 .dbg_stmt,
                 .decl_ref,
-                .decl_ref_str,
                 .decl_val,
-                .deref,
+                .deref_node,
                 .div,
                 .elem_ptr,
                 .elem_val,
                 .ensure_result_used,
                 .ensure_result_non_error,
-                .@"export",
                 .floatcast,
                 .field_ptr,
                 .field_val,
                 .field_ptr_named,
                 .field_val_named,
-                .@"fn",
                 .fn_type,
                 .fn_type_var_args,
                 .fn_type_cc,
@@ -599,7 +864,6 @@ pub const Inst = struct {
                 .mul,
                 .mulwrap,
                 .param_type,
-                .primitive,
                 .ptrtoint,
                 .ref,
                 .ret_ptr,
@@ -635,6 +899,7 @@ pub const Inst = struct {
                 .err_union_code,
                 .err_union_code_ptr,
                 .ptr_type,
+                .ptr_type_simple,
                 .ensure_err_payload_void,
                 .enum_literal,
                 .merge_error_sets,
@@ -650,9 +915,6 @@ pub const Inst = struct {
                 .resolve_inferred_alloc,
                 .set_eval_branch_quota,
                 .compile_log,
-                .enum_type,
-                .union_type,
-                .struct_type,
                 .void_value,
                 .switch_range,
                 .@"resume",
@@ -661,19 +923,19 @@ pub const Inst = struct {
                 => false,
 
                 .@"break",
-                .break_void,
+                .break_void_tok,
                 .condbr,
                 .compile_error,
-                .@"return",
-                .return_void,
+                .ret_node,
+                .ret_tok,
                 .unreachable_unsafe,
                 .unreachable_safe,
                 .loop,
                 .container_field_named,
                 .container_field_typed,
                 .container_field,
-                .switchbr,
-                .switchbr_ref,
+                .switch_br,
+                .switch_br_ref,
                 .@"suspend",
                 .suspend_block,
                 => true,
@@ -681,1346 +943,244 @@ pub const Inst = struct {
         }
     };
 
-    /// Prefer `castTag` to this.
-    pub fn cast(base: *Inst, comptime T: type) ?*T {
-        if (@hasField(T, "base_tag")) {
-            return base.castTag(T.base_tag);
-        }
-        inline for (@typeInfo(Tag).Enum.fields) |field| {
-            const tag = @intToEnum(Tag, field.value);
-            if (base.tag == tag) {
-                if (T == tag.Type()) {
-                    return @fieldParentPtr(T, "base", base);
-                }
-                return null;
+    /// The position of a ZIR instruction within the `Code` instructions array.
+    pub const Index = u32;
+
+    /// A reference to another ZIR instruction. If this value is below a certain
+    /// threshold, it implicitly refers to a comptime-known value from the `Const` enum.
+    /// If it is below a second threshold, it implicitly refers to a parameter of
+    /// the current function.
+    /// Otherwise, after subtracting the offset, it is an index into the
+    /// instruction array.
+    /// This logic is implemented in `Sema.resolveRef`.
+    pub const Ref = u32;
+
+    /// For instructions whose payload fits into 8 bytes, this is used directly.
+    /// When an instruction's payload does not fit, `bin` is used, and
+    /// lhs and rhs refer to `Tag`-specific values, with one of the operands
+    /// used to index into a separate array specific to that instruction.
+    pub const Data = union {
+        /// Used for unary operators, with an AST node source location.
+        un_node: struct {
+            /// Offset from Decl AST node index.
+            src_node: ast.Node.Index,
+            /// The meaning of this operand depends on the corresponding `Tag`.
+            operand: Ref,
+
+            fn src(self: @This()) LazySrcLoc {
+                return .{ .node_offset = self.src_node };
             }
-        }
-        unreachable;
-    }
-
-    pub fn castTag(base: *Inst, comptime tag: Tag) ?*tag.Type() {
-        if (base.tag == tag) {
-            return @fieldParentPtr(tag.Type(), "base", base);
-        }
-        return null;
-    }
-
-    pub const NoOp = struct {
-        base: Inst,
-
-        positionals: struct {},
-        kw_args: struct {},
-    };
-
-    pub const UnOp = struct {
-        base: Inst,
-
-        positionals: struct {
-            operand: *Inst,
-        },
-        kw_args: struct {},
-    };
-
-    pub const BinOp = struct {
-        base: Inst,
-
-        positionals: struct {
-            lhs: *Inst,
-            rhs: *Inst,
-        },
-        kw_args: struct {},
-    };
-
-    pub const Arg = struct {
-        pub const base_tag = Tag.arg;
-        base: Inst,
-
-        positionals: struct {
-            /// This exists to be passed to the arg TZIR instruction, which
-            /// needs it for debug info.
-            name: []const u8,
-        },
-        kw_args: struct {},
-    };
-
-    pub const Block = struct {
-        pub const base_tag = Tag.block;
-        base: Inst,
-
-        positionals: struct {
-            body: Body,
-        },
-        kw_args: struct {},
-    };
-
-    pub const Break = struct {
-        pub const base_tag = Tag.@"break";
-        base: Inst,
-
-        positionals: struct {
-            block: *Block,
-            operand: *Inst,
-        },
-        kw_args: struct {},
-    };
-
-    pub const BreakVoid = struct {
-        pub const base_tag = Tag.break_void;
-        base: Inst,
-
-        positionals: struct {
-            block: *Block,
-        },
-        kw_args: struct {},
-    };
-
-    // TODO break this into multiple call instructions to avoid paying the cost
-    // of the calling convention field most of the time.
-    pub const Call = struct {
-        pub const base_tag = Tag.call;
-        base: Inst,
-
-        positionals: struct {
-            func: *Inst,
-            args: []*Inst,
-            modifier: std.builtin.CallOptions.Modifier = .auto,
         },
-        kw_args: struct {},
-    };
-
-    pub const DeclRef = struct {
-        pub const base_tag = Tag.decl_ref;
-        base: Inst,
-
-        positionals: struct {
-            decl: *IrModule.Decl,
+        /// Used for unary operators, with a token source location.
+        un_tok: struct {
+            /// Offset from Decl AST token index.
+            src_tok: ast.TokenIndex,
+            /// The meaning of this operand depends on the corresponding `Tag`.
+            operand: Ref,
+
+            fn src(self: @This()) LazySrcLoc {
+                return .{ .token_offset = self.src_tok };
+            }
         },
-        kw_args: struct {},
-    };
-
-    pub const DeclRefStr = struct {
-        pub const base_tag = Tag.decl_ref_str;
-        base: Inst,
-
-        positionals: struct {
-            name: *Inst,
+        pl_node: struct {
+            /// Offset from Decl AST node index.
+            /// `Tag` determines which kind of AST node this points to.
+            src_node: ast.Node.Index,
+            /// Index into `extra`.
+            /// `Tag` determines what lives there.
+            payload_index: u32,
+
+            fn src(self: @This()) LazySrcLoc {
+                return .{ .node_offset = self.src_node };
+            }
         },
-        kw_args: struct {},
-    };
-
-    pub const DeclVal = struct {
-        pub const base_tag = Tag.decl_val;
-        base: Inst,
-
-        positionals: struct {
-            decl: *IrModule.Decl,
+        bin: Bin,
+        decl: *Module.Decl,
+        @"const": *TypedValue,
+        str: struct {
+            /// Offset into `string_bytes`.
+            start: u32,
+            /// Number of bytes in the string.
+            len: u32,
+
+            pub fn get(self: @This(), code: Code) []const u8 {
+                return code.string_bytes[self.start..][0..self.len];
+            }
         },
-        kw_args: struct {},
-    };
-
-    pub const CompileLog = struct {
-        pub const base_tag = Tag.compile_log;
-        base: Inst,
-
-        positionals: struct {
-            to_log: []*Inst,
+        /// Offset from Decl AST token index.
+        tok: ast.TokenIndex,
+        /// Offset from Decl AST node index.
+        node: ast.Node.Index,
+        int: u64,
+        condbr: struct {
+            condition: Ref,
+            /// Index into `extra`.
+            payload_index: u32,
         },
-        kw_args: struct {},
-    };
-
-    pub const Const = struct {
-        pub const base_tag = Tag.@"const";
-        base: Inst,
-
-        positionals: struct {
-            typed_value: TypedValue,
+        ptr_type_simple: struct {
+            is_allowzero: bool,
+            is_mutable: bool,
+            is_volatile: bool,
+            size: std.builtin.TypeInfo.Pointer.Size,
+            elem_type: Ref,
         },
-        kw_args: struct {},
-    };
-
-    pub const Str = struct {
-        pub const base_tag = Tag.str;
-        base: Inst,
-
-        positionals: struct {
-            bytes: []const u8,
+        ptr_type: struct {
+            flags: packed struct {
+                is_allowzero: bool,
+                is_mutable: bool,
+                is_volatile: bool,
+                has_sentinel: bool,
+                has_align: bool,
+                has_bit_start: bool,
+                has_bit_end: bool,
+                _: u1 = undefined,
+            },
+            size: std.builtin.TypeInfo.Pointer.Size,
+            /// Index into extra. See `PtrType`.
+            payload_index: u32,
         },
-        kw_args: struct {},
-    };
-
-    pub const Int = struct {
-        pub const base_tag = Tag.int;
-        base: Inst,
-
-        positionals: struct {
-            int: BigIntConst,
+        fn_type: struct {
+            return_type: Ref,
+            /// For `fn_type` this points to a `FnType` in `extra`.
+            /// For `fn_type_cc` this points to `FnTypeCc` in `extra`.
+            payload_index: u32,
         },
-        kw_args: struct {},
-    };
-
-    pub const Loop = struct {
-        pub const base_tag = Tag.loop;
-        base: Inst,
-
-        positionals: struct {
-            body: Body,
+        param_type: struct {
+            callee: Ref,
+            param_index: u32,
         },
-        kw_args: struct {},
-    };
 
-    pub const Field = struct {
-        base: Inst,
-
-        positionals: struct {
-            object: *Inst,
-            field_name: []const u8,
-        },
-        kw_args: struct {},
-    };
-
-    pub const FieldNamed = struct {
-        base: Inst,
-
-        positionals: struct {
-            object: *Inst,
-            field_name: *Inst,
-        },
-        kw_args: struct {},
+        // Make sure we don't accidentally add a field to make this union
+        // bigger than expected. Note that in Debug builds, Zig is allowed
+        // to insert a secret field for safety checks.
+        comptime {
+            if (std.builtin.mode != .Debug) {
+                assert(@sizeOf(Data) == 8);
+            }
+        }
     };
 
+    /// Stored in extra. Trailing is:
+    /// * output_name: u32 // index into string_bytes (null terminated) if output is present
+    /// * arg: Ref // for every args_len.
+    /// * arg_name: u32 // index into string_bytes (null terminated) for every args_len.
+    /// * clobber: u32 // index into string_bytes (null terminated) for every clobbers_len.
     pub const Asm = struct {
-        pub const base_tag = Tag.@"asm";
-        base: Inst,
-
-        positionals: struct {
-            asm_source: *Inst,
-            return_type: *Inst,
-        },
-        kw_args: struct {
-            @"volatile": bool = false,
-            output: ?*Inst = null,
-            inputs: []const []const u8 = &.{},
-            clobbers: []const []const u8 = &.{},
-            args: []*Inst = &[0]*Inst{},
-        },
-    };
-
-    pub const Fn = struct {
-        pub const base_tag = Tag.@"fn";
-        base: Inst,
-
-        positionals: struct {
-            fn_type: *Inst,
-            body: Body,
-        },
-        kw_args: struct {},
-    };
-
-    pub const FnType = struct {
-        pub const base_tag = Tag.fn_type;
-        base: Inst,
-
-        positionals: struct {
-            param_types: []*Inst,
-            return_type: *Inst,
-        },
-        kw_args: struct {},
+        asm_source: Ref,
+        return_type: Ref,
+        /// May be omitted.
+        output: Ref,
+        args_len: u32,
+        clobbers_len: u32,
     };
 
+    /// This data is stored inside extra, with trailing parameter type indexes
+    /// according to `param_types_len`.
+    /// Each param type is a `Ref`.
     pub const FnTypeCc = struct {
-        pub const base_tag = Tag.fn_type_cc;
-        base: Inst,
-
-        positionals: struct {
-            param_types: []*Inst,
-            return_type: *Inst,
-            cc: *Inst,
-        },
-        kw_args: struct {},
+        cc: Ref,
+        param_types_len: u32,
     };
 
-    pub const IntType = struct {
-        pub const base_tag = Tag.int_type;
-        base: Inst,
-
-        positionals: struct {
-            signed: *Inst,
-            bits: *Inst,
-        },
-        kw_args: struct {},
-    };
-
-    pub const Export = struct {
-        pub const base_tag = Tag.@"export";
-        base: Inst,
-
-        positionals: struct {
-            symbol_name: *Inst,
-            decl_name: []const u8,
-        },
-        kw_args: struct {},
+    /// This data is stored inside extra, with trailing parameter type indexes
+    /// according to `param_types_len`.
+    /// Each param type is a `Ref`.
+    pub const FnType = struct {
+        param_types_len: u32,
     };
 
-    pub const ParamType = struct {
-        pub const base_tag = Tag.param_type;
-        base: Inst,
-
-        positionals: struct {
-            func: *Inst,
-            arg_index: usize,
-        },
-        kw_args: struct {},
+    /// This data is stored inside extra, with trailing operands according to `operands_len`.
+    /// Each operand is a `Ref`.
+    pub const MultiOp = struct {
+        operands_len: u32,
     };
 
-    pub const Primitive = struct {
-        pub const base_tag = Tag.primitive;
-        base: Inst,
-
-        positionals: struct {
-            tag: Builtin,
-        },
-        kw_args: struct {},
-
-        pub const Builtin = enum {
-            i8,
-            u8,
-            i16,
-            u16,
-            i32,
-            u32,
-            i64,
-            u64,
-            isize,
-            usize,
-            c_short,
-            c_ushort,
-            c_int,
-            c_uint,
-            c_long,
-            c_ulong,
-            c_longlong,
-            c_ulonglong,
-            c_longdouble,
-            c_void,
-            f16,
-            f32,
-            f64,
-            f128,
-            bool,
-            void,
-            noreturn,
-            type,
-            anyerror,
-            comptime_int,
-            comptime_float,
-            @"true",
-            @"false",
-            @"null",
-            @"undefined",
-            void_value,
-
-            pub fn toTypedValue(self: Builtin) TypedValue {
-                return switch (self) {
-                    .i8 => .{ .ty = Type.initTag(.type), .val = Value.initTag(.i8_type) },
-                    .u8 => .{ .ty = Type.initTag(.type), .val = Value.initTag(.u8_type) },
-                    .i16 => .{ .ty = Type.initTag(.type), .val = Value.initTag(.i16_type) },
-                    .u16 => .{ .ty = Type.initTag(.type), .val = Value.initTag(.u16_type) },
-                    .i32 => .{ .ty = Type.initTag(.type), .val = Value.initTag(.i32_type) },
-                    .u32 => .{ .ty = Type.initTag(.type), .val = Value.initTag(.u32_type) },
-                    .i64 => .{ .ty = Type.initTag(.type), .val = Value.initTag(.i64_type) },
-                    .u64 => .{ .ty = Type.initTag(.type), .val = Value.initTag(.u64_type) },
-                    .isize => .{ .ty = Type.initTag(.type), .val = Value.initTag(.isize_type) },
-                    .usize => .{ .ty = Type.initTag(.type), .val = Value.initTag(.usize_type) },
-                    .c_short => .{ .ty = Type.initTag(.type), .val = Value.initTag(.c_short_type) },
-                    .c_ushort => .{ .ty = Type.initTag(.type), .val = Value.initTag(.c_ushort_type) },
-                    .c_int => .{ .ty = Type.initTag(.type), .val = Value.initTag(.c_int_type) },
-                    .c_uint => .{ .ty = Type.initTag(.type), .val = Value.initTag(.c_uint_type) },
-                    .c_long => .{ .ty = Type.initTag(.type), .val = Value.initTag(.c_long_type) },
-                    .c_ulong => .{ .ty = Type.initTag(.type), .val = Value.initTag(.c_ulong_type) },
-                    .c_longlong => .{ .ty = Type.initTag(.type), .val = Value.initTag(.c_longlong_type) },
-                    .c_ulonglong => .{ .ty = Type.initTag(.type), .val = Value.initTag(.c_ulonglong_type) },
-                    .c_longdouble => .{ .ty = Type.initTag(.type), .val = Value.initTag(.c_longdouble_type) },
-                    .c_void => .{ .ty = Type.initTag(.type), .val = Value.initTag(.c_void_type) },
-                    .f16 => .{ .ty = Type.initTag(.type), .val = Value.initTag(.f16_type) },
-                    .f32 => .{ .ty = Type.initTag(.type), .val = Value.initTag(.f32_type) },
-                    .f64 => .{ .ty = Type.initTag(.type), .val = Value.initTag(.f64_type) },
-                    .f128 => .{ .ty = Type.initTag(.type), .val = Value.initTag(.f128_type) },
-                    .bool => .{ .ty = Type.initTag(.type), .val = Value.initTag(.bool_type) },
-                    .void => .{ .ty = Type.initTag(.type), .val = Value.initTag(.void_type) },
-                    .noreturn => .{ .ty = Type.initTag(.type), .val = Value.initTag(.noreturn_type) },
-                    .type => .{ .ty = Type.initTag(.type), .val = Value.initTag(.type_type) },
-                    .anyerror => .{ .ty = Type.initTag(.type), .val = Value.initTag(.anyerror_type) },
-                    .comptime_int => .{ .ty = Type.initTag(.type), .val = Value.initTag(.comptime_int_type) },
-                    .comptime_float => .{ .ty = Type.initTag(.type), .val = Value.initTag(.comptime_float_type) },
-                    .@"true" => .{ .ty = Type.initTag(.bool), .val = Value.initTag(.bool_true) },
-                    .@"false" => .{ .ty = Type.initTag(.bool), .val = Value.initTag(.bool_false) },
-                    .@"null" => .{ .ty = Type.initTag(.@"null"), .val = Value.initTag(.null_value) },
-                    .@"undefined" => .{ .ty = Type.initTag(.@"undefined"), .val = Value.initTag(.undef) },
-                    .void_value => .{ .ty = Type.initTag(.void), .val = Value.initTag(.void_value) },
-                };
-            }
-        };
-    };
-
-    pub const Elem = struct {
-        base: Inst,
-
-        positionals: struct {
-            array: *Inst,
-            index: *Inst,
-        },
-        kw_args: struct {},
+    /// Stored inside extra, with trailing arguments according to `args_len`.
+    /// Each argument is a `Ref`.
+    pub const Call = struct {
+        callee: Ref,
+        args_len: u32,
     };
 
+    /// This data is stored inside extra, with two sets of trailing indexes:
+    /// * 0. the then body, according to `then_body_len`.
+    /// * 1. the else body, according to `else_body_len`.
     pub const CondBr = struct {
-        pub const base_tag = Tag.condbr;
-        base: Inst,
-
-        positionals: struct {
-            condition: *Inst,
-            then_body: Body,
-            else_body: Body,
-        },
-        kw_args: struct {},
+        then_body_len: u32,
+        else_body_len: u32,
     };
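+    // Because the then body and else body trail the CondBr record back to back,
+    // the else body's start is derived from the then body's length. A hedged
+    // sketch of that arithmetic (function and parameter names are assumptions):
+    //
+    //     fn condBrBodies(
+    //         extra: []const u32,
+    //         end_of_fields: usize, // index just past the CondBr fields
+    //         then_body_len: u32,
+    //         else_body_len: u32,
+    //     ) struct { then_body: []const u32, else_body: []const u32 } {
+    //         const then_body = extra[end_of_fields..][0..then_body_len];
+    //         const else_body = extra[end_of_fields + then_body_len ..][0..else_body_len];
+    //         return .{ .then_body = then_body, .else_body = else_body };
+    //     }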
 
+    /// Stored in extra. Depending on the flags in Data, there will be up to 4
+    /// trailing Ref fields:
+    /// 0. sentinel: Ref // if `has_sentinel` flag is set
+    /// 1. align: Ref // if `has_align` flag is set
+    /// 2. bit_start: Ref // if `has_bit_start` flag is set
+    /// 3. bit_end: Ref // if `has_bit_end` flag is set
     pub const PtrType = struct {
-        pub const base_tag = Tag.ptr_type;
-        base: Inst,
-
-        positionals: struct {
-            child_type: *Inst,
-        },
-        kw_args: struct {
-            @"allowzero": bool = false,
-            @"align": ?*Inst = null,
-            align_bit_start: ?*Inst = null,
-            align_bit_end: ?*Inst = null,
-            mutable: bool = true,
-            @"volatile": bool = false,
-            sentinel: ?*Inst = null,
-            size: std.builtin.TypeInfo.Pointer.Size = .One,
-        },
+        elem_type: Ref,
     };
 
     pub const ArrayTypeSentinel = struct {
-        pub const base_tag = Tag.array_type_sentinel;
-        base: Inst,
-
-        positionals: struct {
-            len: *Inst,
-            sentinel: *Inst,
-            elem_type: *Inst,
-        },
-        kw_args: struct {},
-    };
-
-    pub const EnumLiteral = struct {
-        pub const base_tag = Tag.enum_literal;
-        base: Inst,
-
-        positionals: struct {
-            name: []const u8,
-        },
-        kw_args: struct {},
-    };
-
-    pub const ErrorSet = struct {
-        pub const base_tag = Tag.error_set;
-        base: Inst,
-
-        positionals: struct {
-            fields: [][]const u8,
-        },
-        kw_args: struct {},
+        sentinel: Ref,
+        elem_type: Ref,
     };
 
-    pub const ErrorValue = struct {
-        pub const base_tag = Tag.error_value;
-        base: Inst,
-
-        positionals: struct {
-            name: []const u8,
-        },
-        kw_args: struct {},
-    };
-
-    pub const Slice = struct {
-        pub const base_tag = Tag.slice;
-        base: Inst,
-
-        positionals: struct {
-            array_ptr: *Inst,
-            start: *Inst,
-        },
-        kw_args: struct {
-            end: ?*Inst = null,
-            sentinel: ?*Inst = null,
-        },
-    };
-
-    pub const TypeOfPeer = struct {
-        pub const base_tag = .typeof_peer;
-        base: Inst,
-        positionals: struct {
-            items: []*Inst,
-        },
-        kw_args: struct {},
-    };
-
-    pub const ContainerFieldNamed = struct {
-        pub const base_tag = Tag.container_field_named;
-        base: Inst,
-
-        positionals: struct {
-            bytes: []const u8,
-        },
-        kw_args: struct {},
-    };
-
-    pub const ContainerFieldTyped = struct {
-        pub const base_tag = Tag.container_field_typed;
-        base: Inst,
-
-        positionals: struct {
-            bytes: []const u8,
-            ty: *Inst,
-        },
-        kw_args: struct {},
+    pub const SliceStart = struct {
+        lhs: Ref,
+        start: Ref,
     };
 
-    pub const ContainerField = struct {
-        pub const base_tag = Tag.container_field;
-        base: Inst,
-
-        positionals: struct {
-            bytes: []const u8,
-        },
-        kw_args: struct {
-            ty: ?*Inst = null,
-            init: ?*Inst = null,
-            alignment: ?*Inst = null,
-            is_comptime: bool = false,
-        },
+    pub const SliceEnd = struct {
+        lhs: Ref,
+        start: Ref,
+        end: Ref,
     };
 
-    pub const EnumType = struct {
-        pub const base_tag = Tag.enum_type;
-        base: Inst,
-
-        positionals: struct {
-            fields: []*Inst,
-        },
-        kw_args: struct {
-            tag_type: ?*Inst = null,
-            layout: std.builtin.TypeInfo.ContainerLayout = .Auto,
-        },
+    pub const SliceSentinel = struct {
+        lhs: Ref,
+        start: Ref,
+        end: Ref,
+        sentinel: Ref,
     };
 
-    pub const StructType = struct {
-        pub const base_tag = Tag.struct_type;
-        base: Inst,
-
-        positionals: struct {
-            fields: []*Inst,
-        },
-        kw_args: struct {
-            layout: std.builtin.TypeInfo.ContainerLayout = .Auto,
-        },
-    };
-
-    pub const UnionType = struct {
-        pub const base_tag = Tag.union_type;
-        base: Inst,
-
-        positionals: struct {
-            fields: []*Inst,
-        },
-        kw_args: struct {
-            init_inst: ?*Inst = null,
-            has_enum_token: bool,
-            layout: std.builtin.TypeInfo.ContainerLayout = .Auto,
-        },
+    /// The meaning of these operands depends on the corresponding `Tag`.
+    pub const Bin = struct {
+        lhs: Ref,
+        rhs: Ref,
     };
 
+    /// Stored in extra. Depending on the ZIR tag and the len fields, additional
+    /// fields trail this one in the extra array.
+    /// 0. range: Ref // If the tag has "_range" in it.
+    /// 1. else_body: Ref // If the tag has "_else" or "_underscore" in it.
+    /// 2. items: list of all individual items and ranges.
+    /// 3. cases: {
+    ///        item: Ref,
+    ///        body_len: u32,
+    ///        body member Ref for every body_len
+    ///    } for every cases_len
     pub const SwitchBr = struct {
-        base: Inst,
-
-        positionals: struct {
-            target: *Inst,
-            /// List of all individual items and ranges
-            items: []*Inst,
-            cases: []Case,
-            else_body: Body,
-            /// Pointer to first range if such exists.
-            range: ?*Inst = null,
-            special_prong: SpecialProng = .none,
-        },
-        kw_args: struct {},
-
-        pub const SpecialProng = enum {
-            none,
-            @"else",
-            underscore,
-        };
-
-        pub const Case = struct {
-            item: *Inst,
-            body: Body,
-        };
-    };
-};
-
-pub const ErrorMsg = struct {
-    byte_offset: usize,
-    msg: []const u8,
-};
-
-pub const Body = struct {
-    instructions: []*Inst,
-};
-
-pub const Module = struct {
-    decls: []*Decl,
-    arena: std.heap.ArenaAllocator,
-    error_msg: ?ErrorMsg = null,
-    metadata: std.AutoHashMap(*Inst, MetaData),
-    body_metadata: std.AutoHashMap(*Body, BodyMetaData),
-
-    pub const Decl = struct {
-        name: []const u8,
-
-        /// Hash of slice into the source of the part after the = and before the next instruction.
-        contents_hash: std.zig.SrcHash,
-
-        inst: *Inst,
-    };
-
-    pub const MetaData = struct {
-        deaths: ir.Inst.DeathsInt,
-        addr: usize,
-    };
-
-    pub const BodyMetaData = struct {
-        deaths: []*Inst,
+        /// TODO: investigate why we need to store this; it may be redundant.
+        items_len: u32,
+        cases_len: u32,
     };
 
-    pub fn deinit(self: *Module, allocator: *Allocator) void {
-        self.metadata.deinit();
-        self.body_metadata.deinit();
-        allocator.free(self.decls);
-        self.arena.deinit();
-        self.* = undefined;
-    }
-
-    /// This is a debugging utility for rendering the tree to stderr.
-    pub fn dump(self: Module) void {
-        self.writeToStream(std.heap.page_allocator, std.io.getStdErr().writer()) catch {};
-    }
-
-    const DeclAndIndex = struct {
-        decl: *Decl,
-        index: usize,
+    pub const Field = struct {
+        lhs: Ref,
+        /// Offset into `string_bytes`.
+        field_name_start: u32,
+        /// Number of bytes in the string.
+        field_name_len: u32,
     };
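+    // The `string_bytes` scheme means a field name is recovered by slicing one
+    // shared byte buffer rather than chasing a slice pointer. A minimal sketch
+    // (the helper name here is illustrative, not part of this commit):
+    //
+    //     const std = @import("std");
+    //
+    //     fn fieldName(string_bytes: []const u8, start: u32, len: u32) []const u8 {
+    //         return string_bytes[start .. start + len];
+    //     }
+    //
+    //     pub fn main() void {
+    //         const string_bytes = "lenptrcapacity";
+    //         std.debug.print("{s}\n", .{fieldName(string_bytes, 3, 3)}); // "ptr"
+    //     }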
 
-    /// TODO Look into making a table to speed this up.
-    pub fn findDecl(self: Module, name: []const u8) ?DeclAndIndex {
-        for (self.decls) |decl, i| {
-            if (mem.eql(u8, decl.name, name)) {
-                return DeclAndIndex{
-                    .decl = decl,
-                    .index = i,
-                };
-            }
-        }
-        return null;
-    }
-
-    pub fn findInstDecl(self: Module, inst: *Inst) ?DeclAndIndex {
-        for (self.decls) |decl, i| {
-            if (decl.inst == inst) {
-                return DeclAndIndex{
-                    .decl = decl,
-                    .index = i,
-                };
-            }
-        }
-        return null;
-    }
-
-    /// The allocator is used for temporary storage, but this function always returns
-    /// with no resources allocated.
-    pub fn writeToStream(self: Module, allocator: *Allocator, stream: anytype) !void {
-        var write = Writer{
-            .module = &self,
-            .inst_table = InstPtrTable.init(allocator),
-            .block_table = std.AutoHashMap(*Inst.Block, []const u8).init(allocator),
-            .loop_table = std.AutoHashMap(*Inst.Loop, []const u8).init(allocator),
-            .arena = std.heap.ArenaAllocator.init(allocator),
-            .indent = 2,
-            .next_instr_index = undefined,
-        };
-        defer write.arena.deinit();
-        defer write.inst_table.deinit();
-        defer write.block_table.deinit();
-        defer write.loop_table.deinit();
-
-        // First, build a map of *Inst to @ or % indexes
-        try write.inst_table.ensureCapacity(@intCast(u32, self.decls.len));
-
-        for (self.decls) |decl, decl_i| {
-            try write.inst_table.putNoClobber(decl.inst, .{ .inst = decl.inst, .index = null, .name = decl.name });
-        }
-
-        for (self.decls) |decl, i| {
-            write.next_instr_index = 0;
-            try stream.print("@{s} ", .{decl.name});
-            try write.writeInstToStream(stream, decl.inst);
-            try stream.writeByte('\n');
-        }
-    }
-};
-
-const InstPtrTable = std.AutoHashMap(*Inst, struct { inst: *Inst, index: ?usize, name: []const u8 });
-
-const Writer = struct {
-    module: *const Module,
-    inst_table: InstPtrTable,
-    block_table: std.AutoHashMap(*Inst.Block, []const u8),
-    loop_table: std.AutoHashMap(*Inst.Loop, []const u8),
-    arena: std.heap.ArenaAllocator,
-    indent: usize,
-    next_instr_index: usize,
-
-    fn writeInstToStream(
-        self: *Writer,
-        stream: anytype,
-        inst: *Inst,
-    ) (@TypeOf(stream).Error || error{OutOfMemory})!void {
-        inline for (@typeInfo(Inst.Tag).Enum.fields) |enum_field| {
-            const expected_tag = @field(Inst.Tag, enum_field.name);
-            if (inst.tag == expected_tag) {
-                return self.writeInstToStreamGeneric(stream, expected_tag, inst);
-            }
-        }
-        unreachable; // all tags handled
-    }
-
-    fn writeInstToStreamGeneric(
-        self: *Writer,
-        stream: anytype,
-        comptime inst_tag: Inst.Tag,
-        base: *Inst,
-    ) (@TypeOf(stream).Error || error{OutOfMemory})!void {
-        const SpecificInst = inst_tag.Type();
-        const inst = @fieldParentPtr(SpecificInst, "base", base);
-        const Positionals = @TypeOf(inst.positionals);
-        try stream.writeAll("= " ++ @tagName(inst_tag) ++ "(");
-        const pos_fields = @typeInfo(Positionals).Struct.fields;
-        inline for (pos_fields) |arg_field, i| {
-            if (i != 0) {
-                try stream.writeAll(", ");
-            }
-            try self.writeParamToStream(stream, &@field(inst.positionals, arg_field.name));
-        }
-
-        comptime var need_comma = pos_fields.len != 0;
-        const KW_Args = @TypeOf(inst.kw_args);
-        inline for (@typeInfo(KW_Args).Struct.fields) |arg_field, i| {
-            if (@typeInfo(arg_field.field_type) == .Optional) {
-                if (@field(inst.kw_args, arg_field.name)) |non_optional| {
-                    if (need_comma) try stream.writeAll(", ");
-                    try stream.print("{s}=", .{arg_field.name});
-                    try self.writeParamToStream(stream, &non_optional);
-                    need_comma = true;
-                }
-            } else {
-                if (need_comma) try stream.writeAll(", ");
-                try stream.print("{s}=", .{arg_field.name});
-                try self.writeParamToStream(stream, &@field(inst.kw_args, arg_field.name));
-                need_comma = true;
-            }
-        }
-
-        try stream.writeByte(')');
-    }
-
-    fn writeParamToStream(self: *Writer, stream: anytype, param_ptr: anytype) !void {
-        const param = param_ptr.*;
-        if (@typeInfo(@TypeOf(param)) == .Enum) {
-            return stream.writeAll(@tagName(param));
-        }
-        switch (@TypeOf(param)) {
-            *Inst => return self.writeInstParamToStream(stream, param),
-            ?*Inst => return self.writeInstParamToStream(stream, param.?),
-            []*Inst => {
-                try stream.writeByte('[');
-                for (param) |inst, i| {
-                    if (i != 0) {
-                        try stream.writeAll(", ");
-                    }
-                    try self.writeInstParamToStream(stream, inst);
-                }
-                try stream.writeByte(']');
-            },
-            Body => {
-                try stream.writeAll("{\n");
-                if (self.module.body_metadata.get(param_ptr)) |metadata| {
-                    if (metadata.deaths.len > 0) {
-                        try stream.writeByteNTimes(' ', self.indent);
-                        try stream.writeAll("; deaths={");
-                        for (metadata.deaths) |death, i| {
-                            if (i != 0) try stream.writeAll(", ");
-                            try self.writeInstParamToStream(stream, death);
-                        }
-                        try stream.writeAll("}\n");
-                    }
-                }
-
-                for (param.instructions) |inst| {
-                    const my_i = self.next_instr_index;
-                    self.next_instr_index += 1;
-                    try self.inst_table.putNoClobber(inst, .{ .inst = inst, .index = my_i, .name = undefined });
-                    try stream.writeByteNTimes(' ', self.indent);
-                    try stream.print("%{d} ", .{my_i});
-                    if (inst.cast(Inst.Block)) |block| {
-                        const name = try std.fmt.allocPrint(&self.arena.allocator, "label_{d}", .{my_i});
-                        try self.block_table.put(block, name);
-                    } else if (inst.cast(Inst.Loop)) |loop| {
-                        const name = try std.fmt.allocPrint(&self.arena.allocator, "loop_{d}", .{my_i});
-                        try self.loop_table.put(loop, name);
-                    }
-                    self.indent += 2;
-                    try self.writeInstToStream(stream, inst);
-                    if (self.module.metadata.get(inst)) |metadata| {
-                        try stream.print(" ; deaths=0b{b}", .{metadata.deaths});
-                        // This is conditionally compiled in because addresses mess up the tests due
-                        // to Address Space Layout Randomization. It's super useful when debugging
-                        // codegen.zig though.
-                        if (!std.builtin.is_test) {
-                            try stream.print(" 0x{x}", .{metadata.addr});
-                        }
-                    }
-                    self.indent -= 2;
-                    try stream.writeByte('\n');
-                }
-                try stream.writeByteNTimes(' ', self.indent - 2);
-                try stream.writeByte('}');
-            },
-            bool => return stream.writeByte("01"[@boolToInt(param)]),
-            []u8, []const u8 => return stream.print("\"{}\"", .{std.zig.fmtEscapes(param)}),
-            BigIntConst, usize => return stream.print("{}", .{param}),
-            TypedValue => return stream.print("TypedValue{{ .ty = {}, .val = {}}}", .{ param.ty, param.val }),
-            *IrModule.Decl => return stream.print("Decl({s})", .{param.name}),
-            *Inst.Block => {
-                const name = self.block_table.get(param) orelse "!BADREF!";
-                return stream.print("\"{}\"", .{std.zig.fmtEscapes(name)});
-            },
-            *Inst.Loop => {
-                const name = self.loop_table.get(param).?;
-                return stream.print("\"{}\"", .{std.zig.fmtEscapes(name)});
-            },
-            [][]const u8, []const []const u8 => {
-                try stream.writeByte('[');
-                for (param) |str, i| {
-                    if (i != 0) {
-                        try stream.writeAll(", ");
-                    }
-                    try stream.print("\"{}\"", .{std.zig.fmtEscapes(str)});
-                }
-                try stream.writeByte(']');
-            },
-            []Inst.SwitchBr.Case => {
-                if (param.len == 0) {
-                    return stream.writeAll("{}");
-                }
-                try stream.writeAll("{\n");
-                for (param) |*case, i| {
-                    if (i != 0) {
-                        try stream.writeAll(",\n");
-                    }
-                    try stream.writeByteNTimes(' ', self.indent);
-                    self.indent += 2;
-                    try self.writeParamToStream(stream, &case.item);
-                    try stream.writeAll(" => ");
-                    try self.writeParamToStream(stream, &case.body);
-                    self.indent -= 2;
-                }
-                try stream.writeByte('\n');
-                try stream.writeByteNTimes(' ', self.indent - 2);
-                try stream.writeByte('}');
-            },
-            else => |T| @compileError("unimplemented: rendering parameter of type " ++ @typeName(T)),
-        }
-    }
-
-    fn writeInstParamToStream(self: *Writer, stream: anytype, inst: *Inst) !void {
-        if (self.inst_table.get(inst)) |info| {
-            if (info.index) |i| {
-                try stream.print("%{d}", .{info.index});
-            } else {
-                try stream.print("@{s}", .{info.name});
-            }
-        } else if (inst.cast(Inst.DeclVal)) |decl_val| {
-            try stream.print("@{s}", .{decl_val.positionals.decl.name});
-        } else {
-            // This should be unreachable in theory, but since ZIR is used for debugging the compiler
-            // we output some debug text instead.
-            try stream.print("?{s}?", .{@tagName(inst.tag)});
-        }
-    }
-};
-
-/// For debugging purposes, prints a function representation to stderr.
-pub fn dumpFn(old_module: IrModule, module_fn: *IrModule.Fn) void {
-    const allocator = old_module.gpa;
-    var ctx: DumpTzir = .{
-        .allocator = allocator,
-        .arena = std.heap.ArenaAllocator.init(allocator),
-        .old_module = &old_module,
-        .module_fn = module_fn,
-        .indent = 2,
-        .inst_table = DumpTzir.InstTable.init(allocator),
-        .partial_inst_table = DumpTzir.InstTable.init(allocator),
-        .const_table = DumpTzir.InstTable.init(allocator),
+    pub const FieldNamed = struct {
+        lhs: Ref,
+        field_name: Ref,
     };
-    defer ctx.inst_table.deinit();
-    defer ctx.partial_inst_table.deinit();
-    defer ctx.const_table.deinit();
-    defer ctx.arena.deinit();
-
-    switch (module_fn.state) {
-        .queued => std.debug.print("(queued)", .{}),
-        .inline_only => std.debug.print("(inline_only)", .{}),
-        .in_progress => std.debug.print("(in_progress)", .{}),
-        .sema_failure => std.debug.print("(sema_failure)", .{}),
-        .dependency_failure => std.debug.print("(dependency_failure)", .{}),
-        .success => {
-            const writer = std.io.getStdErr().writer();
-            ctx.dump(module_fn.body, writer) catch @panic("failed to dump TZIR");
-        },
-    }
-}
-
-const DumpTzir = struct {
-    allocator: *Allocator,
-    arena: std.heap.ArenaAllocator,
-    old_module: *const IrModule,
-    module_fn: *IrModule.Fn,
-    indent: usize,
-    inst_table: InstTable,
-    partial_inst_table: InstTable,
-    const_table: InstTable,
-    next_index: usize = 0,
-    next_partial_index: usize = 0,
-    next_const_index: usize = 0,
-
-    const InstTable = std.AutoArrayHashMap(*ir.Inst, usize);
-
-    /// TODO: Improve this code to include a stack of ir.Body and store the instructions
-    /// in there. Now we are putting all the instructions in a function local table,
-    /// however instructions that are in a Body can be thown away when the Body ends.
-    fn dump(dtz: *DumpTzir, body: ir.Body, writer: std.fs.File.Writer) !void {
-        // First pass to pre-populate the table so that we can show even invalid references.
-        // Must iterate the same order we iterate the second time.
-        // We also look for constants and put them in the const_table.
-        try dtz.fetchInstsAndResolveConsts(body);
-
-        std.debug.print("Module.Function(name={s}):\n", .{dtz.module_fn.owner_decl.name});
-
-        for (dtz.const_table.items()) |entry| {
-            const constant = entry.key.castTag(.constant).?;
-            try writer.print("  @{d}: {} = {};\n", .{
-                entry.value, constant.base.ty, constant.val,
-            });
-        }
-
-        return dtz.dumpBody(body, writer);
-    }
-
-    fn fetchInstsAndResolveConsts(dtz: *DumpTzir, body: ir.Body) error{OutOfMemory}!void {
-        for (body.instructions) |inst| {
-            try dtz.inst_table.put(inst, dtz.next_index);
-            dtz.next_index += 1;
-            switch (inst.tag) {
-                .alloc,
-                .retvoid,
-                .unreach,
-                .breakpoint,
-                .dbg_stmt,
-                .arg,
-                => {},
-
-                .ref,
-                .ret,
-                .bitcast,
-                .not,
-                .is_non_null,
-                .is_non_null_ptr,
-                .is_null,
-                .is_null_ptr,
-                .is_err,
-                .is_err_ptr,
-                .ptrtoint,
-                .floatcast,
-                .intcast,
-                .load,
-                .optional_payload,
-                .optional_payload_ptr,
-                .wrap_optional,
-                .wrap_errunion_payload,
-                .wrap_errunion_err,
-                .unwrap_errunion_payload,
-                .unwrap_errunion_err,
-                .unwrap_errunion_payload_ptr,
-                .unwrap_errunion_err_ptr,
-                => {
-                    const un_op = inst.cast(ir.Inst.UnOp).?;
-                    try dtz.findConst(un_op.operand);
-                },
-
-                .add,
-                .addwrap,
-                .sub,
-                .subwrap,
-                .mul,
-                .mulwrap,
-                .cmp_lt,
-                .cmp_lte,
-                .cmp_eq,
-                .cmp_gte,
-                .cmp_gt,
-                .cmp_neq,
-                .store,
-                .bool_and,
-                .bool_or,
-                .bit_and,
-                .bit_or,
-                .xor,
-                => {
-                    const bin_op = inst.cast(ir.Inst.BinOp).?;
-                    try dtz.findConst(bin_op.lhs);
-                    try dtz.findConst(bin_op.rhs);
-                },
-
-                .br => {
-                    const br = inst.castTag(.br).?;
-                    try dtz.findConst(&br.block.base);
-                    try dtz.findConst(br.operand);
-                },
-
-                .br_block_flat => {
-                    const br_block_flat = inst.castTag(.br_block_flat).?;
-                    try dtz.findConst(&br_block_flat.block.base);
-                    try dtz.fetchInstsAndResolveConsts(br_block_flat.body);
-                },
-
-                .br_void => {
-                    const br_void = inst.castTag(.br_void).?;
-                    try dtz.findConst(&br_void.block.base);
-                },
-
-                .block => {
-                    const block = inst.castTag(.block).?;
-                    try dtz.fetchInstsAndResolveConsts(block.body);
-                },
-
-                .condbr => {
-                    const condbr = inst.castTag(.condbr).?;
-                    try dtz.findConst(condbr.condition);
-                    try dtz.fetchInstsAndResolveConsts(condbr.then_body);
-                    try dtz.fetchInstsAndResolveConsts(condbr.else_body);
-                },
-
-                .loop => {
-                    const loop = inst.castTag(.loop).?;
-                    try dtz.fetchInstsAndResolveConsts(loop.body);
-                },
-                .call => {
-                    const call = inst.castTag(.call).?;
-                    try dtz.findConst(call.func);
-                    for (call.args) |arg| {
-                        try dtz.findConst(arg);
-                    }
-                },
-
-                // TODO fill out this debug printing
-                .assembly,
-                .constant,
-                .varptr,
-                .switchbr,
-                => {},
-            }
-        }
-    }
-
-    fn dumpBody(dtz: *DumpTzir, body: ir.Body, writer: std.fs.File.Writer) (std.fs.File.WriteError || error{OutOfMemory})!void {
-        for (body.instructions) |inst| {
-            const my_index = dtz.next_partial_index;
-            try dtz.partial_inst_table.put(inst, my_index);
-            dtz.next_partial_index += 1;
-
-            try writer.writeByteNTimes(' ', dtz.indent);
-            try writer.print("%{d}: {} = {s}(", .{
-                my_index, inst.ty, @tagName(inst.tag),
-            });
-            switch (inst.tag) {
-                .alloc,
-                .retvoid,
-                .unreach,
-                .breakpoint,
-                .dbg_stmt,
-                => try writer.writeAll(")\n"),
-
-                .ref,
-                .ret,
-                .bitcast,
-                .not,
-                .is_non_null,
-                .is_null,
-                .is_non_null_ptr,
-                .is_null_ptr,
-                .is_err,
-                .is_err_ptr,
-                .ptrtoint,
-                .floatcast,
-                .intcast,
-                .load,
-                .optional_payload,
-                .optional_payload_ptr,
-                .wrap_optional,
-                .wrap_errunion_err,
-                .wrap_errunion_payload,
-                .unwrap_errunion_err,
-                .unwrap_errunion_payload,
-                .unwrap_errunion_payload_ptr,
-                .unwrap_errunion_err_ptr,
-                => {
-                    const un_op = inst.cast(ir.Inst.UnOp).?;
-                    const kinky = try dtz.writeInst(writer, un_op.operand);
-                    if (kinky != null) {
-                        try writer.writeAll(") // Instruction does not dominate all uses!\n");
-                    } else {
-                        try writer.writeAll(")\n");
-                    }
-                },
-
-                .add,
-                .addwrap,
-                .sub,
-                .subwrap,
-                .mul,
-                .mulwrap,
-                .cmp_lt,
-                .cmp_lte,
-                .cmp_eq,
-                .cmp_gte,
-                .cmp_gt,
-                .cmp_neq,
-                .store,
-                .bool_and,
-                .bool_or,
-                .bit_and,
-                .bit_or,
-                .xor,
-                => {
-                    const bin_op = inst.cast(ir.Inst.BinOp).?;
-
-                    const lhs_kinky = try dtz.writeInst(writer, bin_op.lhs);
-                    try writer.writeAll(", ");
-                    const rhs_kinky = try dtz.writeInst(writer, bin_op.rhs);
-
-                    if (lhs_kinky != null or rhs_kinky != null) {
-                        try writer.writeAll(") // Instruction does not dominate all uses!");
-                        if (lhs_kinky) |lhs| {
-                            try writer.print(" %{d}", .{lhs});
-                        }
-                        if (rhs_kinky) |rhs| {
-                            try writer.print(" %{d}", .{rhs});
-                        }
-                        try writer.writeAll("\n");
-                    } else {
-                        try writer.writeAll(")\n");
-                    }
-                },
-
-                .arg => {
-                    const arg = inst.castTag(.arg).?;
-                    try writer.print("{s})\n", .{arg.name});
-                },
-
-                .br => {
-                    const br = inst.castTag(.br).?;
-
-                    const lhs_kinky = try dtz.writeInst(writer, &br.block.base);
-                    try writer.writeAll(", ");
-                    const rhs_kinky = try dtz.writeInst(writer, br.operand);
-
-                    if (lhs_kinky != null or rhs_kinky != null) {
-                        try writer.writeAll(") // Instruction does not dominate all uses!");
-                        if (lhs_kinky) |lhs| {
-                            try writer.print(" %{d}", .{lhs});
-                        }
-                        if (rhs_kinky) |rhs| {
-                            try writer.print(" %{d}", .{rhs});
-                        }
-                        try writer.writeAll("\n");
-                    } else {
-                        try writer.writeAll(")\n");
-                    }
-                },
-
-                .br_block_flat => {
-                    const br_block_flat = inst.castTag(.br_block_flat).?;
-                    const block_kinky = try dtz.writeInst(writer, &br_block_flat.block.base);
-                    if (block_kinky != null) {
-                        try writer.writeAll(", { // Instruction does not dominate all uses!\n");
-                    } else {
-                        try writer.writeAll(", {\n");
-                    }
-
-                    const old_indent = dtz.indent;
-                    dtz.indent += 2;
-                    try dtz.dumpBody(br_block_flat.body, writer);
-                    dtz.indent = old_indent;
-
-                    try writer.writeByteNTimes(' ', dtz.indent);
-                    try writer.writeAll("})\n");
-                },
-
-                .br_void => {
-                    const br_void = inst.castTag(.br_void).?;
-                    const kinky = try dtz.writeInst(writer, &br_void.block.base);
-                    if (kinky) |_| {
-                        try writer.writeAll(") // Instruction does not dominate all uses!\n");
-                    } else {
-                        try writer.writeAll(")\n");
-                    }
-                },
-
-                .block => {
-                    const block = inst.castTag(.block).?;
-
-                    try writer.writeAll("{\n");
-
-                    const old_indent = dtz.indent;
-                    dtz.indent += 2;
-                    try dtz.dumpBody(block.body, writer);
-                    dtz.indent = old_indent;
-
-                    try writer.writeByteNTimes(' ', dtz.indent);
-                    try writer.writeAll("})\n");
-                },
-
-                .condbr => {
-                    const condbr = inst.castTag(.condbr).?;
-
-                    const condition_kinky = try dtz.writeInst(writer, condbr.condition);
-                    if (condition_kinky != null) {
-                        try writer.writeAll(", { // Instruction does not dominate all uses!\n");
-                    } else {
-                        try writer.writeAll(", {\n");
-                    }
-
-                    const old_indent = dtz.indent;
-                    dtz.indent += 2;
-                    try dtz.dumpBody(condbr.then_body, writer);
-
-                    try writer.writeByteNTimes(' ', old_indent);
-                    try writer.writeAll("}, {\n");
-
-                    try dtz.dumpBody(condbr.else_body, writer);
-                    dtz.indent = old_indent;
-
-                    try writer.writeByteNTimes(' ', old_indent);
-                    try writer.writeAll("})\n");
-                },
-
-                .loop => {
-                    const loop = inst.castTag(.loop).?;
-
-                    try writer.writeAll("{\n");
-
-                    const old_indent = dtz.indent;
-                    dtz.indent += 2;
-                    try dtz.dumpBody(loop.body, writer);
-                    dtz.indent = old_indent;
-
-                    try writer.writeByteNTimes(' ', dtz.indent);
-                    try writer.writeAll("})\n");
-                },
-
-                .call => {
-                    const call = inst.castTag(.call).?;
-
-                    const args_kinky = try dtz.allocator.alloc(?usize, call.args.len);
-                    defer dtz.allocator.free(args_kinky);
-                    std.mem.set(?usize, args_kinky, null);
-                    var any_kinky_args = false;
-
-                    const func_kinky = try dtz.writeInst(writer, call.func);
-
-                    for (call.args) |arg, i| {
-                        try writer.writeAll(", ");
-
-                        args_kinky[i] = try dtz.writeInst(writer, arg);
-                        any_kinky_args = any_kinky_args or args_kinky[i] != null;
-                    }
-
-                    if (func_kinky != null or any_kinky_args) {
-                        try writer.writeAll(") // Instruction does not dominate all uses!");
-                        if (func_kinky) |func_index| {
-                            try writer.print(" %{d}", .{func_index});
-                        }
-                        for (args_kinky) |arg_kinky| {
-                            if (arg_kinky) |arg_index| {
-                                try writer.print(" %{d}", .{arg_index});
-                            }
-                        }
-                        try writer.writeAll("\n");
-                    } else {
-                        try writer.writeAll(")\n");
-                    }
-                },
-
-                // TODO fill out this debug printing
-                .assembly,
-                .constant,
-                .varptr,
-                .switchbr,
-                => {
-                    try writer.writeAll("!TODO!)\n");
-                },
-            }
-        }
-    }
-
-    fn writeInst(dtz: *DumpTzir, writer: std.fs.File.Writer, inst: *ir.Inst) !?usize {
-        if (dtz.partial_inst_table.get(inst)) |operand_index| {
-            try writer.print("%{d}", .{operand_index});
-            return null;
-        } else if (dtz.const_table.get(inst)) |operand_index| {
-            try writer.print("@{d}", .{operand_index});
-            return null;
-        } else if (dtz.inst_table.get(inst)) |operand_index| {
-            try writer.print("%{d}", .{operand_index});
-            return operand_index;
-        } else {
-            try writer.writeAll("!BADREF!");
-            return null;
-        }
-    }
-
-    fn findConst(dtz: *DumpTzir, operand: *ir.Inst) !void {
-        if (operand.tag == .constant) {
-            try dtz.const_table.put(operand, dtz.next_const_index);
-            dtz.next_const_index += 1;
-        }
-    }
 };
 
 /// For debugging purposes, like dumpFn but for unanalyzed zir blocks
-pub fn dumpZir(allocator: *Allocator, kind: []const u8, decl_name: [*:0]const u8, instructions: []*Inst) !void {
+pub fn dumpZir(gpa: *Allocator, kind: []const u8, decl_name: [*:0]const u8, instructions: []*Inst) !void {
     var fib = std.heap.FixedBufferAllocator.init(&[_]u8{});
     var module = Module{
         .decls = &[_]*Module.Decl{},
@@ -2030,10 +1190,10 @@ pub fn dumpZir(allocator: *Allocator, kind: []const u8, decl_name: [*:0]const u8
     };
     var write = Writer{
         .module = &module,
-        .inst_table = InstPtrTable.init(allocator),
-        .block_table = std.AutoHashMap(*Inst.Block, []const u8).init(allocator),
-        .loop_table = std.AutoHashMap(*Inst.Loop, []const u8).init(allocator),
-        .arena = std.heap.ArenaAllocator.init(allocator),
+        .inst_table = InstPtrTable.init(gpa),
+        .block_table = std.AutoHashMap(*Inst.Block, []const u8).init(gpa),
+        .loop_table = std.AutoHashMap(*Inst.Loop, []const u8).init(gpa),
+        .arena = std.heap.ArenaAllocator.init(gpa),
         .indent = 4,
         .next_instr_index = 0,
     };
src/zir_sema.zig
@@ -1,11 +1,36 @@
 //! Semantic analysis of ZIR instructions.
-//! This file operates on a `Module` instance, transforming untyped ZIR
-//! instructions into semantically-analyzed IR instructions. It does type
-//! checking, comptime control flow, and safety-check generation. This is the
-//! the heart of the Zig compiler.
-//! When deciding if something goes into this file or into Module, here is a
-//! guiding principle: if it has to do with (untyped) ZIR instructions, it goes
-//! here. If the analysis operates on typed IR instructions, it goes in Module.
+//! Shared among all `Block` scopes. Stored on the stack.
+//! State used for compiling a `zir.Code` into TZIR.
+//! Transforms untyped ZIR instructions into semantically-analyzed TZIR instructions.
+//! Does type checking, comptime control flow, and safety-check generation.
+//! This is the heart of the Zig compiler.
+
+mod: *Module,
+/// Same as `mod.gpa`.
+gpa: *Allocator,
+/// Points to the arena allocator of the Decl.
+arena: *Allocator,
+code: zir.Code,
+/// Maps ZIR to TZIR.
+inst_map: []*const Inst,
+/// When analyzing an inline function call, `owner_decl` is the `Decl` of the caller
+/// and `src_decl` of `Scope.Block` is the `Decl` of the callee.
+/// This `Decl` owns the arena memory of this `Sema`.
+owner_decl: *Decl,
+func: ?*Module.Fn,
+/// For now, TZIR requires arg instructions to be the first N instructions in the
+/// TZIR code. We store references here for the purpose of `resolveInst`.
+/// This can get reworked with TZIR memory layout changes, into simply:
+/// > Denormalized data to make `resolveInst` faster. This is 0 if not inside a function,
+/// > otherwise it is the number of parameters of the function.
+/// > param_count: u32
+param_inst_list: []const *ir.Inst,
+branch_quota: u32 = 1000,
+/// This field is updated when a new source location becomes active, so that
+/// instructions which do not have explicitly mapped source locations still have
+/// access to the source location set by the previous instruction which did
+/// contain a mapped source location.
+src: LazySrcLoc = .{ .token_offset = 0 },
 
 const std = @import("std");
 const mem = std.mem;
@@ -13,6 +38,7 @@ const Allocator = std.mem.Allocator;
 const assert = std.debug.assert;
 const log = std.log.scoped(.sema);
 
+const Sema = @This();
 const Value = @import("value.zig").Value;
 const Type = @import("type.zig").Type;
 const TypedValue = @import("TypedValue.zig");
@@ -25,340 +51,408 @@ const trace = @import("tracy.zig").trace;
 const Scope = Module.Scope;
 const InnerError = Module.InnerError;
 const Decl = Module.Decl;
+const LazySrcLoc = Module.LazySrcLoc;
 
-pub fn analyzeInst(mod: *Module, scope: *Scope, old_inst: *zir.Inst) InnerError!*Inst {
-    switch (old_inst.tag) {
-        .alloc => return zirAlloc(mod, scope, old_inst.castTag(.alloc).?),
-        .alloc_mut => return zirAllocMut(mod, scope, old_inst.castTag(.alloc_mut).?),
-        .alloc_inferred => return zirAllocInferred(mod, scope, old_inst.castTag(.alloc_inferred).?, .inferred_alloc_const),
-        .alloc_inferred_mut => return zirAllocInferred(mod, scope, old_inst.castTag(.alloc_inferred_mut).?, .inferred_alloc_mut),
-        .arg => return zirArg(mod, scope, old_inst.castTag(.arg).?),
-        .bitcast_ref => return zirBitcastRef(mod, scope, old_inst.castTag(.bitcast_ref).?),
-        .bitcast_result_ptr => return zirBitcastResultPtr(mod, scope, old_inst.castTag(.bitcast_result_ptr).?),
-        .block => return zirBlock(mod, scope, old_inst.castTag(.block).?, false),
-        .block_comptime => return zirBlock(mod, scope, old_inst.castTag(.block_comptime).?, true),
-        .block_flat => return zirBlockFlat(mod, scope, old_inst.castTag(.block_flat).?, false),
-        .block_comptime_flat => return zirBlockFlat(mod, scope, old_inst.castTag(.block_comptime_flat).?, true),
-        .@"break" => return zirBreak(mod, scope, old_inst.castTag(.@"break").?),
-        .breakpoint => return zirBreakpoint(mod, scope, old_inst.castTag(.breakpoint).?),
-        .break_void => return zirBreakVoid(mod, scope, old_inst.castTag(.break_void).?),
-        .call => return zirCall(mod, scope, old_inst.castTag(.call).?),
-        .coerce_result_ptr => return zirCoerceResultPtr(mod, scope, old_inst.castTag(.coerce_result_ptr).?),
-        .compile_error => return zirCompileError(mod, scope, old_inst.castTag(.compile_error).?),
-        .compile_log => return zirCompileLog(mod, scope, old_inst.castTag(.compile_log).?),
-        .@"const" => return zirConst(mod, scope, old_inst.castTag(.@"const").?),
-        .dbg_stmt => return zirDbgStmt(mod, scope, old_inst.castTag(.dbg_stmt).?),
-        .decl_ref => return zirDeclRef(mod, scope, old_inst.castTag(.decl_ref).?),
-        .decl_ref_str => return zirDeclRefStr(mod, scope, old_inst.castTag(.decl_ref_str).?),
-        .decl_val => return zirDeclVal(mod, scope, old_inst.castTag(.decl_val).?),
-        .ensure_result_used => return zirEnsureResultUsed(mod, scope, old_inst.castTag(.ensure_result_used).?),
-        .ensure_result_non_error => return zirEnsureResultNonError(mod, scope, old_inst.castTag(.ensure_result_non_error).?),
-        .indexable_ptr_len => return zirIndexablePtrLen(mod, scope, old_inst.castTag(.indexable_ptr_len).?),
-        .ref => return zirRef(mod, scope, old_inst.castTag(.ref).?),
-        .resolve_inferred_alloc => return zirResolveInferredAlloc(mod, scope, old_inst.castTag(.resolve_inferred_alloc).?),
-        .ret_ptr => return zirRetPtr(mod, scope, old_inst.castTag(.ret_ptr).?),
-        .ret_type => return zirRetType(mod, scope, old_inst.castTag(.ret_type).?),
-        .store_to_block_ptr => return zirStoreToBlockPtr(mod, scope, old_inst.castTag(.store_to_block_ptr).?),
-        .store_to_inferred_ptr => return zirStoreToInferredPtr(mod, scope, old_inst.castTag(.store_to_inferred_ptr).?),
-        .single_const_ptr_type => return zirSimplePtrType(mod, scope, old_inst.castTag(.single_const_ptr_type).?, false, .One),
-        .single_mut_ptr_type => return zirSimplePtrType(mod, scope, old_inst.castTag(.single_mut_ptr_type).?, true, .One),
-        .many_const_ptr_type => return zirSimplePtrType(mod, scope, old_inst.castTag(.many_const_ptr_type).?, false, .Many),
-        .many_mut_ptr_type => return zirSimplePtrType(mod, scope, old_inst.castTag(.many_mut_ptr_type).?, true, .Many),
-        .c_const_ptr_type => return zirSimplePtrType(mod, scope, old_inst.castTag(.c_const_ptr_type).?, false, .C),
-        .c_mut_ptr_type => return zirSimplePtrType(mod, scope, old_inst.castTag(.c_mut_ptr_type).?, true, .C),
-        .const_slice_type => return zirSimplePtrType(mod, scope, old_inst.castTag(.const_slice_type).?, false, .Slice),
-        .mut_slice_type => return zirSimplePtrType(mod, scope, old_inst.castTag(.mut_slice_type).?, true, .Slice),
-        .ptr_type => return zirPtrType(mod, scope, old_inst.castTag(.ptr_type).?),
-        .store => return zirStore(mod, scope, old_inst.castTag(.store).?),
-        .set_eval_branch_quota => return zirSetEvalBranchQuota(mod, scope, old_inst.castTag(.set_eval_branch_quota).?),
-        .str => return zirStr(mod, scope, old_inst.castTag(.str).?),
-        .int => return zirInt(mod, scope, old_inst.castTag(.int).?),
-        .int_type => return zirIntType(mod, scope, old_inst.castTag(.int_type).?),
-        .loop => return zirLoop(mod, scope, old_inst.castTag(.loop).?),
-        .param_type => return zirParamType(mod, scope, old_inst.castTag(.param_type).?),
-        .ptrtoint => return zirPtrtoint(mod, scope, old_inst.castTag(.ptrtoint).?),
-        .field_ptr => return zirFieldPtr(mod, scope, old_inst.castTag(.field_ptr).?),
-        .field_val => return zirFieldVal(mod, scope, old_inst.castTag(.field_val).?),
-        .field_ptr_named => return zirFieldPtrNamed(mod, scope, old_inst.castTag(.field_ptr_named).?),
-        .field_val_named => return zirFieldValNamed(mod, scope, old_inst.castTag(.field_val_named).?),
-        .deref => return zirDeref(mod, scope, old_inst.castTag(.deref).?),
-        .as => return zirAs(mod, scope, old_inst.castTag(.as).?),
-        .@"asm" => return zirAsm(mod, scope, old_inst.castTag(.@"asm").?),
-        .unreachable_safe => return zirUnreachable(mod, scope, old_inst.castTag(.unreachable_safe).?, true),
-        .unreachable_unsafe => return zirUnreachable(mod, scope, old_inst.castTag(.unreachable_unsafe).?, false),
-        .@"return" => return zirReturn(mod, scope, old_inst.castTag(.@"return").?),
-        .return_void => return zirReturnVoid(mod, scope, old_inst.castTag(.return_void).?),
-        .@"fn" => return zirFn(mod, scope, old_inst.castTag(.@"fn").?),
-        .@"export" => return zirExport(mod, scope, old_inst.castTag(.@"export").?),
-        .primitive => return zirPrimitive(mod, scope, old_inst.castTag(.primitive).?),
-        .fn_type => return zirFnType(mod, scope, old_inst.castTag(.fn_type).?, false),
-        .fn_type_cc => return zirFnTypeCc(mod, scope, old_inst.castTag(.fn_type_cc).?, false),
-        .fn_type_var_args => return zirFnType(mod, scope, old_inst.castTag(.fn_type_var_args).?, true),
-        .fn_type_cc_var_args => return zirFnTypeCc(mod, scope, old_inst.castTag(.fn_type_cc_var_args).?, true),
-        .intcast => return zirIntcast(mod, scope, old_inst.castTag(.intcast).?),
-        .bitcast => return zirBitcast(mod, scope, old_inst.castTag(.bitcast).?),
-        .floatcast => return zirFloatcast(mod, scope, old_inst.castTag(.floatcast).?),
-        .elem_ptr => return zirElemPtr(mod, scope, old_inst.castTag(.elem_ptr).?),
-        .elem_val => return zirElemVal(mod, scope, old_inst.castTag(.elem_val).?),
-        .add => return zirArithmetic(mod, scope, old_inst.castTag(.add).?),
-        .addwrap => return zirArithmetic(mod, scope, old_inst.castTag(.addwrap).?),
-        .sub => return zirArithmetic(mod, scope, old_inst.castTag(.sub).?),
-        .subwrap => return zirArithmetic(mod, scope, old_inst.castTag(.subwrap).?),
-        .mul => return zirArithmetic(mod, scope, old_inst.castTag(.mul).?),
-        .mulwrap => return zirArithmetic(mod, scope, old_inst.castTag(.mulwrap).?),
-        .div => return zirArithmetic(mod, scope, old_inst.castTag(.div).?),
-        .mod_rem => return zirArithmetic(mod, scope, old_inst.castTag(.mod_rem).?),
-        .array_cat => return zirArrayCat(mod, scope, old_inst.castTag(.array_cat).?),
-        .array_mul => return zirArrayMul(mod, scope, old_inst.castTag(.array_mul).?),
-        .bit_and => return zirBitwise(mod, scope, old_inst.castTag(.bit_and).?),
-        .bit_not => return zirBitNot(mod, scope, old_inst.castTag(.bit_not).?),
-        .bit_or => return zirBitwise(mod, scope, old_inst.castTag(.bit_or).?),
-        .xor => return zirBitwise(mod, scope, old_inst.castTag(.xor).?),
-        .shl => return zirShl(mod, scope, old_inst.castTag(.shl).?),
-        .shr => return zirShr(mod, scope, old_inst.castTag(.shr).?),
-        .cmp_lt => return zirCmp(mod, scope, old_inst.castTag(.cmp_lt).?, .lt),
-        .cmp_lte => return zirCmp(mod, scope, old_inst.castTag(.cmp_lte).?, .lte),
-        .cmp_eq => return zirCmp(mod, scope, old_inst.castTag(.cmp_eq).?, .eq),
-        .cmp_gte => return zirCmp(mod, scope, old_inst.castTag(.cmp_gte).?, .gte),
-        .cmp_gt => return zirCmp(mod, scope, old_inst.castTag(.cmp_gt).?, .gt),
-        .cmp_neq => return zirCmp(mod, scope, old_inst.castTag(.cmp_neq).?, .neq),
-        .condbr => return zirCondbr(mod, scope, old_inst.castTag(.condbr).?),
-        .is_null => return zirIsNull(mod, scope, old_inst.castTag(.is_null).?, false),
-        .is_non_null => return zirIsNull(mod, scope, old_inst.castTag(.is_non_null).?, true),
-        .is_null_ptr => return zirIsNullPtr(mod, scope, old_inst.castTag(.is_null_ptr).?, false),
-        .is_non_null_ptr => return zirIsNullPtr(mod, scope, old_inst.castTag(.is_non_null_ptr).?, true),
-        .is_err => return zirIsErr(mod, scope, old_inst.castTag(.is_err).?),
-        .is_err_ptr => return zirIsErrPtr(mod, scope, old_inst.castTag(.is_err_ptr).?),
-        .bool_not => return zirBoolNot(mod, scope, old_inst.castTag(.bool_not).?),
-        .typeof => return zirTypeof(mod, scope, old_inst.castTag(.typeof).?),
-        .typeof_peer => return zirTypeofPeer(mod, scope, old_inst.castTag(.typeof_peer).?),
-        .optional_type => return zirOptionalType(mod, scope, old_inst.castTag(.optional_type).?),
-        .optional_type_from_ptr_elem => return zirOptionalTypeFromPtrElem(mod, scope, old_inst.castTag(.optional_type_from_ptr_elem).?),
-        .optional_payload_safe => return zirOptionalPayload(mod, scope, old_inst.castTag(.optional_payload_safe).?, true),
-        .optional_payload_unsafe => return zirOptionalPayload(mod, scope, old_inst.castTag(.optional_payload_unsafe).?, false),
-        .optional_payload_safe_ptr => return zirOptionalPayloadPtr(mod, scope, old_inst.castTag(.optional_payload_safe_ptr).?, true),
-        .optional_payload_unsafe_ptr => return zirOptionalPayloadPtr(mod, scope, old_inst.castTag(.optional_payload_unsafe_ptr).?, false),
-        .err_union_payload_safe => return zirErrUnionPayload(mod, scope, old_inst.castTag(.err_union_payload_safe).?, true),
-        .err_union_payload_unsafe => return zirErrUnionPayload(mod, scope, old_inst.castTag(.err_union_payload_unsafe).?, false),
-        .err_union_payload_safe_ptr => return zirErrUnionPayloadPtr(mod, scope, old_inst.castTag(.err_union_payload_safe_ptr).?, true),
-        .err_union_payload_unsafe_ptr => return zirErrUnionPayloadPtr(mod, scope, old_inst.castTag(.err_union_payload_unsafe_ptr).?, false),
-        .err_union_code => return zirErrUnionCode(mod, scope, old_inst.castTag(.err_union_code).?),
-        .err_union_code_ptr => return zirErrUnionCodePtr(mod, scope, old_inst.castTag(.err_union_code_ptr).?),
-        .ensure_err_payload_void => return zirEnsureErrPayloadVoid(mod, scope, old_inst.castTag(.ensure_err_payload_void).?),
-        .array_type => return zirArrayType(mod, scope, old_inst.castTag(.array_type).?),
-        .array_type_sentinel => return zirArrayTypeSentinel(mod, scope, old_inst.castTag(.array_type_sentinel).?),
-        .enum_literal => return zirEnumLiteral(mod, scope, old_inst.castTag(.enum_literal).?),
-        .merge_error_sets => return zirMergeErrorSets(mod, scope, old_inst.castTag(.merge_error_sets).?),
-        .error_union_type => return zirErrorUnionType(mod, scope, old_inst.castTag(.error_union_type).?),
-        .anyframe_type => return zirAnyframeType(mod, scope, old_inst.castTag(.anyframe_type).?),
-        .error_set => return zirErrorSet(mod, scope, old_inst.castTag(.error_set).?),
-        .error_value => return zirErrorValue(mod, scope, old_inst.castTag(.error_value).?),
-        .slice => return zirSlice(mod, scope, old_inst.castTag(.slice).?),
-        .slice_start => return zirSliceStart(mod, scope, old_inst.castTag(.slice_start).?),
-        .import => return zirImport(mod, scope, old_inst.castTag(.import).?),
-        .bool_and => return zirBoolOp(mod, scope, old_inst.castTag(.bool_and).?),
-        .bool_or => return zirBoolOp(mod, scope, old_inst.castTag(.bool_or).?),
-        .void_value => return mod.constVoid(scope, old_inst.src),
-        .switchbr => return zirSwitchBr(mod, scope, old_inst.castTag(.switchbr).?, false),
-        .switchbr_ref => return zirSwitchBr(mod, scope, old_inst.castTag(.switchbr_ref).?, true),
-        .switch_range => return zirSwitchRange(mod, scope, old_inst.castTag(.switch_range).?),
-        .@"await" => return zirAwait(mod, scope, old_inst.castTag(.@"await").?),
-        .nosuspend_await => return zirAwait(mod, scope, old_inst.castTag(.nosuspend_await).?),
-        .@"resume" => return zirResume(mod, scope, old_inst.castTag(.@"resume").?),
-        .@"suspend" => return zirSuspend(mod, scope, old_inst.castTag(.@"suspend").?),
-        .suspend_block => return zirSuspendBlock(mod, scope, old_inst.castTag(.suspend_block).?),
-
-        .container_field_named,
-        .container_field_typed,
-        .container_field,
-        .enum_type,
-        .union_type,
-        .struct_type,
-        => return mod.fail(scope, old_inst.src, "TODO analyze container instructions", .{}),
-    }
-}
-
-pub fn analyzeBody(mod: *Module, block: *Scope.Block, body: zir.Body) !void {
-    const tracy = trace(@src());
-    defer tracy.end();
-
-    for (body.instructions) |src_inst| {
-        const analyzed_inst = try analyzeInst(mod, &block.base, src_inst);
-        try block.inst_table.putNoClobber(src_inst, analyzed_inst);
-        if (analyzed_inst.ty.zigTypeTag() == .NoReturn) {
-            break;
-        }
+// TODO when memory layout of TZIR is reworked, this can be simplified.
+const const_tzir_inst_list = blk: {
+    var result: [zir.const_inst_list.len]ir.Inst.Const = undefined;
+    for (result) |*tzir_const, i| {
+        tzir_const.* = .{
+            .base = .{
+                .tag = .constant,
+                .ty = zir.const_inst_list[i].ty,
+                .src = 0,
+            },
+            .val = zir.const_inst_list[i].val,
+        };
     }
+    break :blk result;
+};
+
+pub fn root(sema: *Sema, root_block: *Scope.Block) !void {
+    const root_body = sema.code.extra[sema.code.root_start..][0..sema.code.root_len];
+    return sema.body(root_block, root_body);
 }
 
-pub fn analyzeBodyValueAsType(
-    mod: *Module,
-    block_scope: *Scope.Block,
-    zir_result_inst: *zir.Inst,
+pub fn rootAsType(
+    sema: *Sema,
+    root_block: *Scope.Block,
+    zir_result_inst: zir.Inst.Index,
     body: zir.Body,
 ) !Type {
-    try analyzeBody(mod, block_scope, body);
-    const result_inst = block_scope.inst_table.get(zir_result_inst).?;
-    const val = try mod.resolveConstValue(&block_scope.base, result_inst);
-    return val.toType(block_scope.base.arena());
+    const root_body = sema.code.extra[sema.code.root_start..][0..sema.code.root_len];
+    try sema.body(root_block, root_body);
+
+    const result_inst = sema.inst_map[zir_result_inst];
+    // Source location is unneeded because resolveConstValue must have already
+    // been successfully called when coercing the value to a type, from the
+    // result location.
+    const val = try sema.resolveConstValue(root_block, .unneeded, result_inst);
+    return val.toType(root_block.arena);
+}
+
+pub fn body(sema: *Sema, block: *Scope.Block, body: []const zir.Inst.Index) !void {
+    const tracy = trace(@src());
+    defer tracy.end();
+
+    const map = block.sema.inst_map;
+    const tags = block.sema.code.instructions.items(.tag);
+
+    // TODO: As an optimization, look into making these switch prongs directly jump
+    // to the next one, rather than detouring through the loop condition.
+    // Also, look into leaving only the "noreturn" loop break condition, and removing
+    // the iteration based one. Better yet, have an extra entry in the tags array as a
+    // sentinel, so that exiting the loop is just another jump table prong.
+    // Related: https://github.com/ziglang/zig/issues/8220
+    for (body) |zir_inst| {
+        map[zir_inst] = switch (tags[zir_inst]) {
+            .alloc => try sema.zirAlloc(block, zir_inst),
+            .alloc_mut => try sema.zirAllocMut(block, zir_inst),
+            .alloc_inferred => try sema.zirAllocInferred(block, zir_inst, Type.initTag(.inferred_alloc_const)),
+            .alloc_inferred_mut => try sema.zirAllocInferred(block, zir_inst, Type.initTag(.inferred_alloc_mut)),
+            .bitcast_ref => try sema.zirBitcastRef(block, zir_inst),
+            .bitcast_result_ptr => try sema.zirBitcastResultPtr(block, zir_inst),
+            .block => try sema.zirBlock(block, zir_inst, false),
+            .block_comptime => try sema.zirBlock(block, zir_inst, true),
+            .block_flat => try sema.zirBlockFlat(block, zir_inst, false),
+            .block_comptime_flat => try sema.zirBlockFlat(block, zir_inst, true),
+            .@"break" => try sema.zirBreak(block, zir_inst),
+            .break_void_tok => try sema.zirBreakVoidTok(block, zir_inst),
+            .breakpoint => try sema.zirBreakpoint(block, zir_inst),
+            .call => try sema.zirCall(block, zir_inst, .auto),
+            .call_async_kw => try sema.zirCall(block, zir_inst, .async_kw),
+            .call_no_async => try sema.zirCall(block, zir_inst, .no_async),
+            .call_compile_time => try sema.zirCall(block, zir_inst, .compile_time),
+            .call_none => try sema.zirCallNone(block, zir_inst),
+            .coerce_result_ptr => try sema.zirCoerceResultPtr(block, zir_inst),
+            .compile_error => try sema.zirCompileError(block, zir_inst),
+            .compile_log => try sema.zirCompileLog(block, zir_inst),
+            .@"const" => try sema.zirConst(block, zir_inst),
+            .dbg_stmt_node => try sema.zirDbgStmtNode(block, zir_inst),
+            .decl_ref => try sema.zirDeclRef(block, zir_inst),
+            .decl_val => try sema.zirDeclVal(block, zir_inst),
+            .ensure_result_used => try sema.zirEnsureResultUsed(block, zir_inst),
+            .ensure_result_non_error => try sema.zirEnsureResultNonError(block, zir_inst),
+            .indexable_ptr_len => try sema.zirIndexablePtrLen(block, zir_inst),
+            .ref => try sema.zirRef(block, zir_inst),
+            .resolve_inferred_alloc => try sema.zirResolveInferredAlloc(block, zir_inst),
+            .ret_ptr => try sema.zirRetPtr(block, zir_inst),
+            .ret_type => try sema.zirRetType(block, zir_inst),
+            .store_to_block_ptr => try sema.zirStoreToBlockPtr(block, zir_inst),
+            .store_to_inferred_ptr => try sema.zirStoreToInferredPtr(block, zir_inst),
+            .ptr_type_simple => try sema.zirPtrTypeSimple(block, zir_inst),
+            .ptr_type => try sema.zirPtrType(block, zir_inst),
+            .store => try sema.zirStore(block, zir_inst),
+            .set_eval_branch_quota => try sema.zirSetEvalBranchQuota(block, zir_inst),
+            .str => try sema.zirStr(block, zir_inst),
+            .int => try sema.zirInt(block, zir_inst),
+            .int_type => try sema.zirIntType(block, zir_inst),
+            .loop => try sema.zirLoop(block, zir_inst),
+            .param_type => try sema.zirParamType(block, zir_inst),
+            .ptrtoint => try sema.zirPtrtoint(block, zir_inst),
+            .field_ptr => try sema.zirFieldPtr(block, zir_inst),
+            .field_val => try sema.zirFieldVal(block, zir_inst),
+            .field_ptr_named => try sema.zirFieldPtrNamed(block, zir_inst),
+            .field_val_named => try sema.zirFieldValNamed(block, zir_inst),
+            .deref => try sema.zirDeref(block, zir_inst),
+            .as => try sema.zirAs(block, zir_inst),
+            .@"asm" => try sema.zirAsm(block, zir_inst, false),
+            .asm_volatile => try sema.zirAsm(block, zir_inst, true),
+            .unreachable_safe => try sema.zirUnreachable(block, zir_inst, true),
+            .unreachable_unsafe => try sema.zirUnreachable(block, zir_inst, false),
+            .ret_tok => try sema.zirRetTok(block, zir_inst),
+            .ret_node => try sema.zirRetNode(block, zir_inst),
+            .fn_type => try sema.zirFnType(block, zir_inst),
+            .fn_type_cc => try sema.zirFnTypeCc(block, zir_inst),
+            .intcast => try sema.zirIntcast(block, zir_inst),
+            .bitcast => try sema.zirBitcast(block, zir_inst),
+            .floatcast => try sema.zirFloatcast(block, zir_inst),
+            .elem_ptr => try sema.zirElemPtr(block, zir_inst),
+            .elem_ptr_node => try sema.zirElemPtrNode(block, zir_inst),
+            .elem_val => try sema.zirElemVal(block, zir_inst),
+            .elem_val_node => try sema.zirElemValNode(block, zir_inst),
+            .add => try sema.zirArithmetic(block, zir_inst),
+            .addwrap => try sema.zirArithmetic(block, zir_inst),
+            .sub => try sema.zirArithmetic(block, zir_inst),
+            .subwrap => try sema.zirArithmetic(block, zir_inst),
+            .mul => try sema.zirArithmetic(block, zir_inst),
+            .mulwrap => try sema.zirArithmetic(block, zir_inst),
+            .div => try sema.zirArithmetic(block, zir_inst),
+            .mod_rem => try sema.zirArithmetic(block, zir_inst),
+            .array_cat => try sema.zirArrayCat(block, zir_inst),
+            .array_mul => try sema.zirArrayMul(block, zir_inst),
+            .bit_and => try sema.zirBitwise(block, zir_inst),
+            .bit_not => try sema.zirBitNot(block, zir_inst),
+            .bit_or => try sema.zirBitwise(block, zir_inst),
+            .xor => try sema.zirBitwise(block, zir_inst),
+            .shl => try sema.zirShl(block, zir_inst),
+            .shr => try sema.zirShr(block, zir_inst),
+            .cmp_lt => try sema.zirCmp(block, zir_inst, .lt),
+            .cmp_lte => try sema.zirCmp(block, zir_inst, .lte),
+            .cmp_eq => try sema.zirCmp(block, zir_inst, .eq),
+            .cmp_gte => try sema.zirCmp(block, zir_inst, .gte),
+            .cmp_gt => try sema.zirCmp(block, zir_inst, .gt),
+            .cmp_neq => try sema.zirCmp(block, zir_inst, .neq),
+            .condbr => try sema.zirCondbr(block, zir_inst),
+            .is_null => try sema.zirIsNull(block, zir_inst, false),
+            .is_non_null => try sema.zirIsNull(block, zir_inst, true),
+            .is_null_ptr => try sema.zirIsNullPtr(block, zir_inst, false),
+            .is_non_null_ptr => try sema.zirIsNullPtr(block, zir_inst, true),
+            .is_err => try sema.zirIsErr(block, zir_inst),
+            .is_err_ptr => try sema.zirIsErrPtr(block, zir_inst),
+            .bool_not => try sema.zirBoolNot(block, zir_inst),
+            .typeof => try sema.zirTypeof(block, zir_inst),
+            .typeof_peer => try sema.zirTypeofPeer(block, zir_inst),
+            .optional_type => try sema.zirOptionalType(block, zir_inst),
+            .optional_type_from_ptr_elem => try sema.zirOptionalTypeFromPtrElem(block, zir_inst),
+            .optional_payload_safe => try sema.zirOptionalPayload(block, zir_inst, true),
+            .optional_payload_unsafe => try sema.zirOptionalPayload(block, zir_inst, false),
+            .optional_payload_safe_ptr => try sema.zirOptionalPayloadPtr(block, zir_inst, true),
+            .optional_payload_unsafe_ptr => try sema.zirOptionalPayloadPtr(block, zir_inst, false),
+            .err_union_payload_safe => try sema.zirErrUnionPayload(block, zir_inst, true),
+            .err_union_payload_unsafe => try sema.zirErrUnionPayload(block, zir_inst, false),
+            .err_union_payload_safe_ptr => try sema.zirErrUnionPayloadPtr(block, zir_inst, true),
+            .err_union_payload_unsafe_ptr => try sema.zirErrUnionPayloadPtr(block, zir_inst, false),
+            .err_union_code => try sema.zirErrUnionCode(block, zir_inst),
+            .err_union_code_ptr => try sema.zirErrUnionCodePtr(block, zir_inst),
+            .ensure_err_payload_void => try sema.zirEnsureErrPayloadVoid(block, zir_inst),
+            .array_type => try sema.zirArrayType(block, zir_inst),
+            .array_type_sentinel => try sema.zirArrayTypeSentinel(block, zir_inst),
+            .enum_literal => try sema.zirEnumLiteral(block, zir_inst),
+            .merge_error_sets => try sema.zirMergeErrorSets(block, zir_inst),
+            .error_union_type => try sema.zirErrorUnionType(block, zir_inst),
+            .anyframe_type => try sema.zirAnyframeType(block, zir_inst),
+            .error_set => try sema.zirErrorSet(block, zir_inst),
+            .error_value => try sema.zirErrorValue(block, zir_inst),
+            .slice_start => try sema.zirSliceStart(block, zir_inst),
+            .slice_end => try sema.zirSliceEnd(block, zir_inst),
+            .slice_sentinel => try sema.zirSliceSentinel(block, zir_inst),
+            .import => try sema.zirImport(block, zir_inst),
+            .bool_and => try sema.zirBoolOp(block, zir_inst, false),
+            .bool_or => try sema.zirBoolOp(block, zir_inst, true),
+            .void_value => try sema.mod.constVoid(block.arena, .unneeded),
+            .switchbr => try sema.zirSwitchBr(block, zir_inst, false),
+            .switchbr_ref => try sema.zirSwitchBr(block, zir_inst, true),
+            .switch_range => try sema.zirSwitchRange(block, zir_inst),
+        };
+        if (map[zir_inst].ty.isNoReturn()) {
+            break;
+        }
+    }
 }
 
-pub fn resolveInst(mod: *Module, scope: *Scope, zir_inst: *zir.Inst) InnerError!*Inst {
-    const block = scope.cast(Scope.Block).?;
-    return block.inst_table.get(zir_inst).?; // Instruction does not dominate all uses!
+fn resolveInst(sema: *Sema, block: *Scope.Block, zir_ref: zir.Inst.Ref) *const ir.Inst {
+    var i = zir_ref;
+
+    // First section of indexes corresponds to a set number of constant values.
+    if (i < const_tzir_inst_list.len) {
+        return &const_tzir_inst_list[i];
+    }
+    i -= const_tzir_inst_list.len;
+
+    // Next section of indexes corresponds to function parameters, if any.
+    if (block.inlining) |inlining| {
+        if (i < inlining.casted_args.len) {
+            return inlining.casted_args[i];
+        }
+        i -= inlining.casted_args.len;
+    } else {
+        if (i < sema.param_inst_list.len) {
+            return sema.param_inst_list[i];
+        }
+        i -= sema.param_inst_list.len;
+    }
+
+    // Finally, the last section of indexes refers to the map of ZIR=>TZIR.
+    return sema.inst_map[i];
 }
 
-fn resolveConstString(mod: *Module, scope: *Scope, old_inst: *zir.Inst) ![]u8 {
-    const new_inst = try resolveInst(mod, scope, old_inst);
+fn resolveConstString(
+    sema: *Sema,
+    block: *Scope.Block,
+    src: LazySrcLoc,
+    zir_ref: zir.Inst.Ref,
+) ![]u8 {
+    const tzir_inst = sema.resolveInst(block, zir_ref);
     const wanted_type = Type.initTag(.const_slice_u8);
-    const coerced_inst = try mod.coerce(scope, wanted_type, new_inst);
-    const val = try mod.resolveConstValue(scope, coerced_inst);
-    return val.toAllocatedBytes(scope.arena());
+    const coerced_inst = try sema.coerce(block, wanted_type, tzir_inst);
+    const val = try sema.resolveConstValue(block, src, coerced_inst);
+    return val.toAllocatedBytes(block.arena);
 }
 
-fn resolveType(mod: *Module, scope: *Scope, old_inst: *zir.Inst) !Type {
-    const new_inst = try resolveInst(mod, scope, old_inst);
+fn resolveType(sema: *Sema, block: *Scope.Block, src: LazySrcLoc, zir_ref: zir.Inst.Ref) !Type {
+    const tzir_inst = sema.resolveInst(block, zir_ref);
     const wanted_type = Type.initTag(.@"type");
-    const coerced_inst = try mod.coerce(scope, wanted_type, new_inst);
-    const val = try mod.resolveConstValue(scope, coerced_inst);
-    return val.toType(scope.arena());
+    const coerced_inst = try sema.coerce(block, wanted_type, tzir_inst);
+    const val = try sema.resolveConstValue(block, src, coerced_inst);
+    return val.toType(sema.arena);
+}
+
+fn resolveConstValue(sema: *Sema, block: *Scope.Block, src: LazySrcLoc, base: *ir.Inst) !Value {
+    return (try sema.resolveDefinedValue(block, src, base)) orelse
+        return sema.mod.fail(&block.base, src, "unable to resolve comptime value", .{});
+}
+
+fn resolveDefinedValue(sema: *Sema, block: *Scope.Block, src: LazySrcLoc, base: *ir.Inst) !?Value {
+    if (base.value()) |val| {
+        if (val.isUndef()) {
+            return sema.mod.fail(&block.base, src, "use of undefined value here causes undefined behavior", .{});
+        }
+        return val;
+    }
+    return null;
 }
 
 /// Appropriate to call when the coercion has already been done by result
 /// location semantics. Asserts the value fits in the provided `Int` type.
 /// Only supports `Int` types 64 bits or less.
 fn resolveAlreadyCoercedInt(
-    mod: *Module,
-    scope: *Scope,
-    old_inst: *zir.Inst,
+    sema: *Sema,
+    block: *Scope.Block,
+    src: LazySrcLoc,
+    zir_ref: zir.Inst.Ref,
     comptime Int: type,
 ) !Int {
     comptime assert(@typeInfo(Int).Int.bits <= 64);
-    const new_inst = try resolveInst(mod, scope, old_inst);
-    const val = try mod.resolveConstValue(scope, new_inst);
+    const tzir_inst = sema.resolveInst(block, zir_ref);
+    const val = try sema.resolveConstValue(block, src, tzir_inst);
     switch (@typeInfo(Int).Int.signedness) {
         .signed => return @intCast(Int, val.toSignedInt()),
         .unsigned => return @intCast(Int, val.toUnsignedInt()),
     }
 }
 
-fn resolveInt(mod: *Module, scope: *Scope, old_inst: *zir.Inst, dest_type: Type) !u64 {
-    const new_inst = try resolveInst(mod, scope, old_inst);
-    const coerced = try mod.coerce(scope, dest_type, new_inst);
-    const val = try mod.resolveConstValue(scope, coerced);
+fn resolveInt(
+    sema: *Sema,
+    block: *Scope.Block,
+    src: LazySrcLoc,
+    zir_ref: zir.Inst.Ref,
+    dest_type: Type,
+) !u64 {
+    const tzir_inst = sema.resolveInst(block, zir_ref);
+    const coerced = try sema.coerce(block, dest_type, tzir_inst);
+    const val = try sema.resolveConstValue(block, src, coerced);
 
     return val.toUnsignedInt();
 }
 
-pub fn resolveInstConst(mod: *Module, scope: *Scope, old_inst: *zir.Inst) InnerError!TypedValue {
-    const new_inst = try resolveInst(mod, scope, old_inst);
-    const val = try mod.resolveConstValue(scope, new_inst);
+fn resolveInstConst(
+    sema: *Sema,
+    block: *Scope.Block,
+    src: LazySrcLoc,
+    zir_ref: zir.Inst.Ref,
+) InnerError!TypedValue {
+    const tzir_inst = sema.resolveInst(block, zir_ref);
+    const val = try sema.resolveConstValue(block, src, tzir_inst);
     return TypedValue{
-        .ty = new_inst.ty,
+        .ty = tzir_inst.ty,
         .val = val,
     };
 }
 
-fn zirConst(mod: *Module, scope: *Scope, const_inst: *zir.Inst.Const) InnerError!*Inst {
+fn zirConst(sema: *Sema, block: *Scope.Block, const_inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
     // Move the TypedValue from old memory to new memory. This allows freeing the ZIR instructions
     // after analysis.
-    const typed_value_copy = try const_inst.positionals.typed_value.copy(scope.arena());
-    return mod.constInst(scope, const_inst.base.src, typed_value_copy);
+    const typed_value_copy = try const_inst.positionals.typed_value.copy(block.arena);
+    return sema.mod.constInst(block.arena, const_inst.base.src, typed_value_copy);
 }
 
-fn analyzeConstInst(mod: *Module, scope: *Scope, old_inst: *zir.Inst) InnerError!TypedValue {
-    const new_inst = try analyzeInst(mod, scope, old_inst);
-    return TypedValue{
-        .ty = new_inst.ty,
-        .val = try mod.resolveConstValue(scope, new_inst),
-    };
-}
-
-fn zirBitcastRef(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirBitcastRef(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    return mod.fail(scope, inst.base.src, "TODO implement zir_sema.zirBitcastRef", .{});
+    return sema.mod.fail(&block.base, inst.base.src, "TODO implement zir_sema.zirBitcastRef", .{});
 }
 
-fn zirBitcastResultPtr(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirBitcastResultPtr(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    return mod.fail(scope, inst.base.src, "TODO implement zir_sema.zirBitcastResultPtr", .{});
+    return sema.mod.fail(&block.base, inst.base.src, "TODO implement zir_sema.zirBitcastResultPtr", .{});
 }
 
-fn zirCoerceResultPtr(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirCoerceResultPtr(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    return mod.fail(scope, inst.base.src, "TODO implement zirCoerceResultPtr", .{});
+    return sema.mod.fail(&block.base, inst.base.src, "TODO implement zirCoerceResultPtr", .{});
 }
 
-fn zirRetPtr(mod: *Module, scope: *Scope, inst: *zir.Inst.NoOp) InnerError!*Inst {
+fn zirRetPtr(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const b = try mod.requireFunctionBlock(scope, inst.base.src);
-    const fn_ty = b.func.?.owner_decl.typed_value.most_recent.typed_value.ty;
+
+    try sema.requireFunctionBlock(block, inst.base.src);
+    const fn_ty = block.func.?.owner_decl.typed_value.most_recent.typed_value.ty;
     const ret_type = fn_ty.fnReturnType();
-    const ptr_type = try mod.simplePtrType(scope, inst.base.src, ret_type, true, .One);
-    return mod.addNoOp(b, inst.base.src, ptr_type, .alloc);
+    const ptr_type = try sema.mod.simplePtrType(block.arena, ret_type, true, .One);
+    return block.addNoOp(inst.base.src, ptr_type, .alloc);
 }
 
-fn zirRef(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirRef(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const operand = try resolveInst(mod, scope, inst.positionals.operand);
-    return mod.analyzeRef(scope, inst.base.src, operand);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const operand = sema.resolveInst(block, inst_data.operand);
+    return sema.analyzeRef(block, inst_data.src(), operand);
 }
 
-fn zirRetType(mod: *Module, scope: *Scope, inst: *zir.Inst.NoOp) InnerError!*Inst {
+fn zirRetType(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const b = try mod.requireFunctionBlock(scope, inst.base.src);
+    try sema.requireFunctionBlock(block, inst.base.src);
-    const fn_ty = b.func.?.owner_decl.typed_value.most_recent.typed_value.ty;
+    const fn_ty = block.func.?.owner_decl.typed_value.most_recent.typed_value.ty;
     const ret_type = fn_ty.fnReturnType();
-    return mod.constType(scope, inst.base.src, ret_type);
+    return sema.mod.constType(block.arena, inst.base.src, ret_type);
 }
 
-fn zirEnsureResultUsed(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirEnsureResultUsed(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const operand = try resolveInst(mod, scope, inst.positionals.operand);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const operand = sema.resolveInst(block, inst_data.operand);
+    const src = inst_data.src();
     switch (operand.ty.zigTypeTag()) {
-        .Void, .NoReturn => return mod.constVoid(scope, operand.src),
-        else => return mod.fail(scope, operand.src, "expression value is ignored", .{}),
+        .Void, .NoReturn => return sema.mod.constVoid(block.arena, .unneeded),
+        else => return sema.mod.fail(&block.base, src, "expression value is ignored", .{}),
     }
 }
 
-fn zirEnsureResultNonError(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirEnsureResultNonError(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const operand = try resolveInst(mod, scope, inst.positionals.operand);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const operand = sema.resolveInst(block, inst_data.operand);
+    const src = inst_data.src();
     switch (operand.ty.zigTypeTag()) {
-        .ErrorSet, .ErrorUnion => return mod.fail(scope, operand.src, "error is discarded", .{}),
-        else => return mod.constVoid(scope, operand.src),
+        .ErrorSet, .ErrorUnion => return sema.mod.fail(&block.base, src, "error is discarded", .{}),
+        else => return sema.mod.constVoid(block.arena, .unneeded),
     }
 }
 
-fn zirIndexablePtrLen(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirIndexablePtrLen(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const array_ptr = try resolveInst(mod, scope, inst.positionals.operand);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const array_ptr = sema.resolveInst(block, inst_data.operand);
+
     const elem_ty = array_ptr.ty.elemType();
     if (!elem_ty.isIndexable()) {
+        const cond_src: LazySrcLoc = .{ .node_offset_for_cond = inst_data.src_node };
         const msg = msg: {
-            const msg = try mod.errMsg(
-                scope,
-                inst.base.src,
+            const msg = try sema.mod.errMsg(
+                &block.base,
+                cond_src,
                 "type '{}' does not support indexing",
                 .{elem_ty},
             );
-            errdefer msg.destroy(mod.gpa);
+            errdefer msg.destroy(sema.gpa);
-            try mod.errNote(
-                scope,
-                inst.base.src,
+            try sema.mod.errNote(
+                &block.base,
+                cond_src,
                 msg,
                 "for loop operand must be an array, slice, tuple, or vector",
                 .{},
@@ -367,38 +461,46 @@ fn zirIndexablePtrLen(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerEr
         };
-        return mod.failWithOwnedErrorMsg(scope, msg);
+        return sema.mod.failWithOwnedErrorMsg(&block.base, msg);
     }
-    const result_ptr = try mod.namedFieldPtr(scope, inst.base.src, array_ptr, "len", inst.base.src);
-    return mod.analyzeDeref(scope, inst.base.src, result_ptr, result_ptr.src);
+    const src = inst_data.src();
+    const result_ptr = try sema.namedFieldPtr(block, src, array_ptr, "len", src);
+    return sema.analyzeDeref(block, src, result_ptr, result_ptr.src);
 }
 
-fn zirAlloc(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirAlloc(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const var_type = try resolveType(mod, scope, inst.positionals.operand);
-    const ptr_type = try mod.simplePtrType(scope, inst.base.src, var_type, true, .One);
-    const b = try mod.requireRuntimeBlock(scope, inst.base.src);
-    return mod.addNoOp(b, inst.base.src, ptr_type, .alloc);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const ty_src: LazySrcLoc = .{ .node_offset_var_decl_ty = inst_data.src_node };
+    const var_decl_src = inst_data.src();
+    const var_type = try sema.resolveType(block, ty_src, inst_data.operand);
+    const ptr_type = try sema.mod.simplePtrType(block.arena, var_type, true, .One);
+    try sema.requireRuntimeBlock(block, var_decl_src);
+    return block.addNoOp(var_decl_src, ptr_type, .alloc);
 }
 
-fn zirAllocMut(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirAllocMut(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const var_type = try resolveType(mod, scope, inst.positionals.operand);
-    try mod.validateVarType(scope, inst.base.src, var_type);
-    const ptr_type = try mod.simplePtrType(scope, inst.base.src, var_type, true, .One);
-    const b = try mod.requireRuntimeBlock(scope, inst.base.src);
-    return mod.addNoOp(b, inst.base.src, ptr_type, .alloc);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const var_decl_src = inst_data.src();
+    const ty_src: LazySrcLoc = .{ .node_offset_var_decl_ty = inst_data.src_node };
+    const var_type = try sema.resolveType(block, ty_src, inst_data.operand);
+    try sema.validateVarType(block, ty_src, var_type);
+    const ptr_type = try sema.mod.simplePtrType(block.arena, var_type, true, .One);
+    try sema.requireRuntimeBlock(block, var_decl_src);
+    return block.addNoOp(var_decl_src, ptr_type, .alloc);
 }
 
 fn zirAllocInferred(
-    mod: *Module,
-    scope: *Scope,
-    inst: *zir.Inst.NoOp,
-    mut_tag: Type.Tag,
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
+    inferred_alloc_ty: Type,
 ) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const val_payload = try scope.arena().create(Value.Payload.InferredAlloc);
+    const val_payload = try block.arena.create(Value.Payload.InferredAlloc);
     val_payload.* = .{
         .data = .{},
     };
@@ -406,193 +508,197 @@ fn zirAllocInferred(
     // not needed in the case of constant values. However here, we plan to "downgrade"
     // to a normal instruction when we hit `resolve_inferred_alloc`. So we append
     // to the block even though it is currently a `.constant`.
-    const result = try mod.constInst(scope, inst.base.src, .{
-        .ty = switch (mut_tag) {
-            .inferred_alloc_const => Type.initTag(.inferred_alloc_const),
-            .inferred_alloc_mut => Type.initTag(.inferred_alloc_mut),
-            else => unreachable,
-        },
+    const result = try sema.mod.constInst(block.arena, inst.base.src, .{
+        .ty = inferred_alloc_ty,
         .val = Value.initPayload(&val_payload.base),
     });
-    const block = try mod.requireFunctionBlock(scope, inst.base.src);
-    try block.instructions.append(mod.gpa, result);
+    try sema.requireFunctionBlock(block, inst.base.src);
+    try block.instructions.append(sema.gpa, result);
     return result;
 }
 
 fn zirResolveInferredAlloc(
-    mod: *Module,
-    scope: *Scope,
-    inst: *zir.Inst.UnOp,
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
 ) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const ptr = try resolveInst(mod, scope, inst.positionals.operand);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const ty_src: LazySrcLoc = .{ .node_offset_var_decl_ty = inst_data.src_node };
+    const ptr = sema.resolveInst(block, inst_data.operand);
     const ptr_val = ptr.castTag(.constant).?.val;
     const inferred_alloc = ptr_val.castTag(.inferred_alloc).?;
     const peer_inst_list = inferred_alloc.data.stored_inst_list.items;
-    const final_elem_ty = try mod.resolvePeerTypes(scope, peer_inst_list);
+    const final_elem_ty = try sema.resolvePeerTypes(block, peer_inst_list);
     const var_is_mut = switch (ptr.ty.tag()) {
         .inferred_alloc_const => false,
         .inferred_alloc_mut => true,
         else => unreachable,
     };
     if (var_is_mut) {
-        try mod.validateVarType(scope, inst.base.src, final_elem_ty);
+        try sema.validateVarType(block, ty_src, final_elem_ty);
     }
-    const final_ptr_ty = try mod.simplePtrType(scope, inst.base.src, final_elem_ty, true, .One);
+    const final_ptr_ty = try sema.mod.simplePtrType(block.arena, final_elem_ty, true, .One);
 
     // Change it to a normal alloc.
     ptr.ty = final_ptr_ty;
     ptr.tag = .alloc;
 
-    return mod.constVoid(scope, inst.base.src);
+    return sema.mod.constVoid(block.arena, .unneeded);
 }
 
 fn zirStoreToBlockPtr(
-    mod: *Module,
-    scope: *Scope,
-    inst: *zir.Inst.BinOp,
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
 ) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const ptr = try resolveInst(mod, scope, inst.positionals.lhs);
-    const value = try resolveInst(mod, scope, inst.positionals.rhs);
-    const ptr_ty = try mod.simplePtrType(scope, inst.base.src, value.ty, true, .One);
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const ptr = sema.resolveInst(block, bin_inst.lhs);
+    const value = sema.resolveInst(block, bin_inst.rhs);
+    const ptr_ty = try sema.mod.simplePtrType(block.arena, value.ty, true, .One);
     // TODO detect when this store should be done at compile-time. For example,
     // if expressions should force it when the condition is compile-time known.
-    const b = try mod.requireRuntimeBlock(scope, inst.base.src);
-    const bitcasted_ptr = try mod.addUnOp(b, inst.base.src, ptr_ty, .bitcast, ptr);
+    try sema.requireRuntimeBlock(block, inst.base.src);
+    const bitcasted_ptr = try block.addUnOp(inst.base.src, ptr_ty, .bitcast, ptr);
-    return mod.storePtr(scope, inst.base.src, bitcasted_ptr, value);
+    return sema.storePtr(block, inst.base.src, bitcasted_ptr, value);
 }
 
 fn zirStoreToInferredPtr(
-    mod: *Module,
-    scope: *Scope,
-    inst: *zir.Inst.BinOp,
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
 ) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const ptr = try resolveInst(mod, scope, inst.positionals.lhs);
-    const value = try resolveInst(mod, scope, inst.positionals.rhs);
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const ptr = sema.resolveInst(block, bin_inst.lhs);
+    const value = sema.resolveInst(block, bin_inst.rhs);
     const inferred_alloc = ptr.castTag(.constant).?.val.castTag(.inferred_alloc).?;
     // Add the stored instruction to the set we will use to resolve peer types
     // for the inferred allocation.
-    try inferred_alloc.data.stored_inst_list.append(scope.arena(), value);
+    try inferred_alloc.data.stored_inst_list.append(block.arena, value);
     // Create a runtime bitcast instruction with exactly the type the pointer wants.
-    const ptr_ty = try mod.simplePtrType(scope, inst.base.src, value.ty, true, .One);
-    const b = try mod.requireRuntimeBlock(scope, inst.base.src);
-    const bitcasted_ptr = try mod.addUnOp(b, inst.base.src, ptr_ty, .bitcast, ptr);
+    const ptr_ty = try sema.mod.simplePtrType(block.arena, value.ty, true, .One);
+    try sema.requireRuntimeBlock(block, inst.base.src);
+    const bitcasted_ptr = try block.addUnOp(inst.base.src, ptr_ty, .bitcast, ptr);
-    return mod.storePtr(scope, inst.base.src, bitcasted_ptr, value);
+    return sema.storePtr(block, inst.base.src, bitcasted_ptr, value);
 }
 
 fn zirSetEvalBranchQuota(
-    mod: *Module,
-    scope: *Scope,
-    inst: *zir.Inst.UnOp,
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
 ) InnerError!*Inst {
-    const b = try mod.requireFunctionBlock(scope, inst.base.src);
-    const quota = try resolveAlreadyCoercedInt(mod, scope, inst.positionals.operand, u32);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const src = inst_data.src();
+    try sema.requireFunctionBlock(block, src);
+    const quota = try sema.resolveAlreadyCoercedInt(block, src, inst_data.operand, u32);
-    if (b.branch_quota.* < quota)
-        b.branch_quota.* = quota;
+    if (block.branch_quota.* < quota)
+        block.branch_quota.* = quota;
-    return mod.constVoid(scope, inst.base.src);
+    return sema.mod.constVoid(block.arena, .unneeded);
 }
 
-fn zirStore(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirStore(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const ptr = try resolveInst(mod, scope, inst.positionals.lhs);
-    const value = try resolveInst(mod, scope, inst.positionals.rhs);
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const ptr = sema.resolveInst(block, bin_inst.lhs);
+    const value = sema.resolveInst(block, bin_inst.rhs);
-    return mod.storePtr(scope, inst.base.src, ptr, value);
+    return sema.storePtr(block, inst.base.src, ptr, value);
 }
 
-fn zirParamType(mod: *Module, scope: *Scope, inst: *zir.Inst.ParamType) InnerError!*Inst {
+fn zirParamType(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const fn_inst = try resolveInst(mod, scope, inst.positionals.func);
-    const arg_index = inst.positionals.arg_index;
+
+    const inst_data = sema.code.instructions.items(.data)[inst].param_type;
+    const fn_inst = sema.resolveInst(inst_data.callee);
+    const param_index = inst_data.param_index;
 
     const fn_ty: Type = switch (fn_inst.ty.zigTypeTag()) {
         .Fn => fn_inst.ty,
         .BoundFn => {
-            return mod.fail(scope, fn_inst.src, "TODO implement zirParamType for method call syntax", .{});
+            return sema.mod.fail(&block.base, fn_inst.src, "TODO implement zirParamType for method call syntax", .{});
         },
         else => {
-            return mod.fail(scope, fn_inst.src, "expected function, found '{}'", .{fn_inst.ty});
+            return sema.mod.fail(&block.base, fn_inst.src, "expected function, found '{}'", .{fn_inst.ty});
         },
     };
 
     const param_count = fn_ty.fnParamLen();
-    if (arg_index >= param_count) {
+    if (param_index >= param_count) {
         if (fn_ty.fnIsVarArgs()) {
-            return mod.constType(scope, inst.base.src, Type.initTag(.var_args_param));
+            return sema.mod.constType(block.arena, inst.base.src, Type.initTag(.var_args_param));
         }
-        return mod.fail(scope, inst.base.src, "arg index {d} out of bounds; '{}' has {d} argument(s)", .{
-            arg_index,
+        return sema.mod.fail(&block.base, sema.src, "arg index {d} out of bounds; '{}' has {d} argument(s)", .{
+            param_index,
             fn_ty,
             param_count,
         });
     }
 
     // TODO support generic functions
-    const param_type = fn_ty.fnParamType(arg_index);
-    return mod.constType(scope, inst.base.src, param_type);
+    const param_type = fn_ty.fnParamType(param_index);
+    return sema.mod.constType(block.arena, sema.src, param_type);
 }
 
-fn zirStr(mod: *Module, scope: *Scope, str_inst: *zir.Inst.Str) InnerError!*Inst {
+fn zirStr(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    // The bytes references memory inside the ZIR module, which can get deallocated
-    // after semantic analysis is complete. We need the memory to be in the new anonymous Decl's arena.
-    var new_decl_arena = std.heap.ArenaAllocator.init(mod.gpa);
+
+    // The bytes reference memory inside the ZIR module, which is fine: multiple
+    // anonymous Decls may have strings that point into the same ZIR module.
+    const bytes = sema.code.instructions.items(.data)[inst].str.get(sema.code);
+
+    var new_decl_arena = std.heap.ArenaAllocator.init(sema.gpa);
     errdefer new_decl_arena.deinit();
-    const arena_bytes = try new_decl_arena.allocator.dupe(u8, str_inst.positionals.bytes);
 
-    const decl_ty = try Type.Tag.array_u8_sentinel_0.create(&new_decl_arena.allocator, arena_bytes.len);
-    const decl_val = try Value.Tag.bytes.create(&new_decl_arena.allocator, arena_bytes);
+    const decl_ty = try Type.Tag.array_u8_sentinel_0.create(&new_decl_arena.allocator, bytes.len);
+    const decl_val = try Value.Tag.bytes.create(&new_decl_arena.allocator, bytes);
 
-    const new_decl = try mod.createAnonymousDecl(scope, &new_decl_arena, .{
+    const new_decl = try sema.mod.createAnonymousDecl(&block.base, &new_decl_arena, .{
         .ty = decl_ty,
         .val = decl_val,
     });
-    return mod.analyzeDeclRef(scope, str_inst.base.src, new_decl);
+    return sema.analyzeDeclRef(block, .unneeded, new_decl);
 }
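[editor note] The `.str.get(sema.code)` call in `zirStr` implies the commit message's `string_bytes` design: instructions hold a compact reference into one shared byte buffer instead of owning their own string storage. A sketch of that interning scheme (the `(start, len)` reference shape is an assumption for illustration):

```python
class Code:
    """Sketch of the shared string_bytes pool referenced by str instructions."""
    def __init__(self):
        self.string_bytes = bytearray()

    def put_string(self, s):
        # Store (start, len); in the real layout this pair fits in the
        # instruction's 8 bytes of data.
        start = len(self.string_bytes)
        self.string_bytes += s.encode()
        return (start, len(s))

    def get_string(self, ref):
        # Mirrors `inst_data.str.get(code)`: slice the shared buffer.
        start, length = ref
        return self.string_bytes[start:start + length].decode()

code = Code()
ref = code.put_string("hello")
assert code.get_string(ref) == "hello"
```

Because the buffer outlives every instruction that points into it, consumers such as the anonymous string Decl above no longer need to duplicate the bytes into their own arena.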
 
-fn zirInt(mod: *Module, scope: *Scope, inst: *zir.Inst.Int) InnerError!*Inst {
+fn zirInt(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    return mod.constIntBig(scope, inst.base.src, Type.initTag(.comptime_int), inst.positionals.int);
+    const int = sema.code.instructions.items(.data)[inst].int;
+    return sema.mod.constIntUnsigned(block.arena, .unneeded, Type.initTag(.comptime_int), int);
 }
 
-fn zirExport(mod: *Module, scope: *Scope, export_inst: *zir.Inst.Export) InnerError!*Inst {
+fn zirCompileError(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const symbol_name = try resolveConstString(mod, scope, export_inst.positionals.symbol_name);
-    const exported_decl = mod.lookupDeclName(scope, export_inst.positionals.decl_name) orelse
-        return mod.fail(scope, export_inst.base.src, "decl '{s}' not found", .{export_inst.positionals.decl_name});
-    try mod.analyzeExport(scope, export_inst.base.src, symbol_name, exported_decl);
-    return mod.constVoid(scope, export_inst.base.src);
-}
 
-fn zirCompileError(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
-    const tracy = trace(@src());
-    defer tracy.end();
-    const msg = try resolveConstString(mod, scope, inst.positionals.operand);
-    return mod.fail(scope, inst.base.src, "{s}", .{msg});
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const src = inst_data.src();
+    const operand_src: LazySrcLoc = .{ .node_offset_builtin_call_arg0 = inst_data.src_node };
+    const msg = try sema.resolveConstString(block, operand_src, inst_data.operand);
+    return sema.mod.fail(&block.base, src, "{s}", .{msg});
 }
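[editor note] `zirCompileError` above shows the new `LazySrcLoc` scheme from the commit message: instead of a `usize` byte offset everywhere, source locations are tagged values (`.unneeded`, `.node_offset_builtin_call_arg0`, ...) that are only resolved to a byte offset when a diagnostic actually needs one. A sketch of the idea (tag names other than the ones seen in this diff are made up):

```python
from dataclasses import dataclass

@dataclass
class LazySrcLoc:
    kind: str       # "unneeded", "byte_offset", "node_offset", ...
    value: int = 0  # meaning depends on kind

    def resolve(self, node_byte_offsets, decl_node):
        # Only pay for source-location resolution when reporting an error.
        if self.kind == "byte_offset":
            return self.value
        if self.kind == "node_offset":
            # Node offsets are stored relative to the owner Decl's AST node,
            # so moving a Decl does not invalidate them.
            return node_byte_offsets[decl_node + self.value]
        raise ValueError("tried to resolve an 'unneeded' source location")

loc = LazySrcLoc("node_offset", 2)
assert loc.resolve({5: 100, 7: 140}, 5) == 140
```

Storing Decl-relative node offsets is what makes the locations cheap to carry in 4-8 bytes while staying stable across incremental updates.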
 
-fn zirCompileLog(mod: *Module, scope: *Scope, inst: *zir.Inst.CompileLog) InnerError!*Inst {
+fn zirCompileLog(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
-    var managed = mod.compile_log_text.toManaged(mod.gpa);
-    defer mod.compile_log_text = managed.moveToUnmanaged();
+    var managed = sema.mod.compile_log_text.toManaged(sema.gpa);
+    defer sema.mod.compile_log_text = managed.moveToUnmanaged();
     const writer = managed.writer();
 
-    for (inst.positionals.to_log) |arg_inst, i| {
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const extra = sema.code.extraData(zir.Inst.MultiOp, inst_data.payload_index);
+    for (sema.code.extra[extra.end..][0..extra.data.operands_len]) |arg_ref, i| {
         if (i != 0) try writer.print(", ", .{});
 
-        const arg = try resolveInst(mod, scope, arg_inst);
+        const arg = sema.resolveInst(arg_ref);
         if (arg.value()) |val| {
             try writer.print("@as({}, {})", .{ arg.ty, val });
         } else {
@@ -604,40 +710,16 @@ fn zirCompileLog(mod: *Module, scope: *Scope, inst: *zir.Inst.CompileLog) InnerE
-    const gop = try mod.compile_log_decls.getOrPut(mod.gpa, scope.ownerDecl().?);
+    const gop = try sema.mod.compile_log_decls.getOrPut(sema.gpa, block.base.ownerDecl().?);
     if (!gop.found_existing) {
         gop.entry.value = .{
-            .file_scope = scope.getFileScope(),
-            .byte_offset = inst.base.src,
+            .file_scope = block.base.getFileScope(),
+            .lazy = inst_data.src(),
         };
     }
-    return mod.constVoid(scope, inst.base.src);
+    return sema.mod.constVoid(block.arena, .unneeded);
 }
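[editor note] `zirCompileLog` demonstrates the `extraData` + trailing-operands pattern: a fixed-size header lives at `payload_index` in the `extra` pool, and `extra.end` marks where the variable-length operand list begins. A sketch with the `operands_len` field from this hunk (everything else is illustrative):

```python
class Code:
    """Sketch of the extra pool behind pl_node payloads like MultiOp."""
    def __init__(self):
        self.extra = []  # flat u32 pool

    def add_multi_op(self, operands):
        # Append the fixed header (operands_len), then the trailing refs.
        payload_index = len(self.extra)
        self.extra.append(len(operands))
        self.extra.extend(operands)
        return payload_index

    def extra_data_multi_op(self, payload_index):
        # Mirrors `code.extraData(zir.Inst.MultiOp, payload_index)`: returns
        # the decoded header fields plus the index just past the header.
        operands_len = self.extra[payload_index]
        end = payload_index + 1
        return operands_len, end

code = Code()
idx = code.add_multi_op([3, 4, 5])
operands_len, end = code.extra_data_multi_op(idx)
assert code.extra[end:end + operands_len] == [3, 4, 5]
```

This is why the loop in `zirCompileLog` slices `sema.code.extra[extra.end..][0..extra.data.operands_len]`: the header decode hands back exactly where the operand refs start.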
 
-fn zirArg(mod: *Module, scope: *Scope, inst: *zir.Inst.Arg) InnerError!*Inst {
+fn zirLoop(sema: *Sema, parent_block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const b = try mod.requireFunctionBlock(scope, inst.base.src);
-    if (b.inlining) |inlining| {
-        const param_index = inlining.param_index;
-        inlining.param_index += 1;
-        return inlining.casted_args[param_index];
-    }
-    const fn_ty = b.func.?.owner_decl.typed_value.most_recent.typed_value.ty;
-    const param_index = b.instructions.items.len;
-    const param_count = fn_ty.fnParamLen();
-    if (param_index >= param_count) {
-        return mod.fail(scope, inst.base.src, "parameter index {d} outside list of length {d}", .{
-            param_index,
-            param_count,
-        });
-    }
-    const param_type = fn_ty.fnParamType(param_index);
-    const name = try scope.arena().dupeZ(u8, inst.positionals.name);
-    return mod.addArg(b, inst.base.src, param_type, name);
-}
-
-fn zirLoop(mod: *Module, scope: *Scope, inst: *zir.Inst.Loop) InnerError!*Inst {
-    const tracy = trace(@src());
-    defer tracy.end();
-    const parent_block = scope.cast(Scope.Block).?;
 
     // Reserve space for a Loop instruction so that generated Break instructions can
     // point to it, even if it doesn't end up getting used because the code ends up being
@@ -666,7 +748,7 @@ fn zirLoop(mod: *Module, scope: *Scope, inst: *zir.Inst.Loop) InnerError!*Inst {
     };
-    defer child_block.instructions.deinit(mod.gpa);
+    defer child_block.instructions.deinit(sema.gpa);
 
-    try analyzeBody(mod, &child_block, inst.positionals.body);
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const extra = sema.code.extraData(zir.Inst.Block, inst_data.payload_index);
+    const body = sema.code.extra[extra.end..][0..extra.data.body_len];
+    try sema.analyzeBody(&child_block, body);
 
     // Loop repetition is implied so the last instruction may or may not be a noreturn instruction.
 
@@ -675,16 +757,15 @@ fn zirLoop(mod: *Module, scope: *Scope, inst: *zir.Inst.Loop) InnerError!*Inst {
     return &loop_inst.base;
 }
 
-fn zirBlockFlat(mod: *Module, scope: *Scope, inst: *zir.Inst.Block, is_comptime: bool) InnerError!*Inst {
+fn zirBlockFlat(sema: *Sema, parent_block: *Scope.Block, inst: zir.Inst.Index, is_comptime: bool) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const parent_block = scope.cast(Scope.Block).?;
 
     var child_block = parent_block.makeSubBlock();
-    defer child_block.instructions.deinit(mod.gpa);
+    defer child_block.instructions.deinit(sema.gpa);
     child_block.is_comptime = child_block.is_comptime or is_comptime;
 
-    try analyzeBody(mod, &child_block, inst.positionals.body);
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const extra = sema.code.extraData(zir.Inst.Block, inst_data.payload_index);
+    const body = sema.code.extra[extra.end..][0..extra.data.body_len];
+    try sema.analyzeBody(&child_block, body);
 
     // Move the analyzed instructions into the parent block arena.
     const copied_instructions = try parent_block.arena.dupe(*Inst, child_block.instructions.items);
@@ -693,20 +774,18 @@ fn zirBlockFlat(mod: *Module, scope: *Scope, inst: *zir.Inst.Block, is_comptime:
     // The result of a flat block is the last instruction.
-    const zir_inst_list = inst.positionals.body.instructions;
-    const last_zir_inst = zir_inst_list[zir_inst_list.len - 1];
-    return resolveInst(mod, scope, last_zir_inst);
+    const last_zir_inst = body[body.len - 1];
+    return sema.inst_map[last_zir_inst];
 }
 
 fn zirBlock(
-    mod: *Module,
-    scope: *Scope,
-    inst: *zir.Inst.Block,
+    sema: *Sema,
+    parent_block: *Scope.Block,
+    inst: zir.Inst.Index,
     is_comptime: bool,
 ) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const parent_block = scope.cast(Scope.Block).?;
-
     // Reserve space for a Block instruction so that generated Break instructions can
     // point to it, even if it doesn't end up getting used because the code ends up being
     // comptime evaluated.
@@ -747,22 +826,20 @@ fn zirBlock(
-    defer merges.results.deinit(mod.gpa);
-    defer merges.br_list.deinit(mod.gpa);
+    defer merges.results.deinit(sema.gpa);
+    defer merges.br_list.deinit(sema.gpa);
 
-    try analyzeBody(mod, &child_block, inst.positionals.body);
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const extra = sema.code.extraData(zir.Inst.Block, inst_data.payload_index);
+    const body = sema.code.extra[extra.end..][0..extra.data.body_len];
+    try sema.analyzeBody(&child_block, body);
 
-    return analyzeBlockBody(mod, scope, &child_block, merges);
+    return sema.analyzeBlockBody(parent_block, &child_block, merges);
 }
 
 fn analyzeBlockBody(
-    mod: *Module,
-    scope: *Scope,
+    sema: *Sema,
+    parent_block: *Scope.Block,
     child_block: *Scope.Block,
     merges: *Scope.Block.Merges,
 ) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const parent_block = scope.cast(Scope.Block).?;
-
     // Blocks must terminate with noreturn instruction.
     assert(child_block.instructions.items.len != 0);
     assert(child_block.instructions.items[child_block.instructions.items.len - 1].ty.isNoReturn());
@@ -793,7 +870,7 @@ fn analyzeBlockBody(
     // Need to set the type and emit the Block instruction. This allows machine code generation
     // to emit a jump instruction to after the block when it encounters the break.
-    try parent_block.instructions.append(mod.gpa, &merges.block_inst.base);
+    try parent_block.instructions.append(sema.gpa, &merges.block_inst.base);
-    const resolved_ty = try mod.resolvePeerTypes(scope, merges.results.items);
+    const resolved_ty = try sema.resolvePeerTypes(parent_block, merges.results.items);
     merges.block_inst.base.ty = resolved_ty;
     merges.block_inst.body = .{
         .instructions = try parent_block.arena.dupe(*Inst, child_block.instructions.items),
@@ -807,7 +884,7 @@ fn analyzeBlockBody(
         }
         var coerce_block = parent_block.makeSubBlock();
-        defer coerce_block.instructions.deinit(mod.gpa);
+        defer coerce_block.instructions.deinit(sema.gpa);
-        const coerced_operand = try mod.coerce(&coerce_block.base, resolved_ty, br.operand);
+        const coerced_operand = try sema.coerce(&coerce_block.base, resolved_ty, br.operand);
         // If no instructions were produced, such as in the case of a coercion of a
         // constant value to a new type, we can simply point the br operand to it.
         if (coerce_block.instructions.items.len == 0) {
@@ -835,43 +912,46 @@ fn analyzeBlockBody(
     return &merges.block_inst.base;
 }
 
-fn zirBreakpoint(mod: *Module, scope: *Scope, inst: *zir.Inst.NoOp) InnerError!*Inst {
+fn zirBreakpoint(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const b = try mod.requireRuntimeBlock(scope, inst.base.src);
-    return mod.addNoOp(b, inst.base.src, Type.initTag(.void), .breakpoint);
+
+    const src_node = sema.code.instructions.items(.data)[inst].node;
+    const src: LazySrcLoc = .{ .node_offset = src_node };
+    try sema.requireRuntimeBlock(block, src);
+    return block.addNoOp(src, Type.initTag(.void), .breakpoint);
 }
 
-fn zirBreak(mod: *Module, scope: *Scope, inst: *zir.Inst.Break) InnerError!*Inst {
+fn zirBreak(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const operand = try resolveInst(mod, scope, inst.positionals.operand);
-    const block = inst.positionals.block;
-    return analyzeBreak(mod, scope, inst.base.src, block, operand);
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const operand = sema.resolveInst(bin_inst.rhs);
+    const zir_block = bin_inst.lhs;
+    return sema.analyzeBreak(block, sema.src, zir_block, operand);
 }
 
-fn zirBreakVoid(mod: *Module, scope: *Scope, inst: *zir.Inst.BreakVoid) InnerError!*Inst {
+fn zirBreakVoidTok(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const block = inst.positionals.block;
-    const void_inst = try mod.constVoid(scope, inst.base.src);
-    return analyzeBreak(mod, scope, inst.base.src, block, void_inst);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const zir_block = inst_data.operand;
+    const void_inst = try sema.mod.constVoid(block.arena, .unneeded);
+    return sema.analyzeBreak(block, inst_data.src(), zir_block, void_inst);
 }
 
 fn analyzeBreak(
-    mod: *Module,
-    scope: *Scope,
-    src: usize,
-    zir_block: *zir.Inst.Block,
+    sema: *Sema,
+    block: *Scope.Block,
+    src: LazySrcLoc,
+    zir_block: zir.Inst.Index,
     operand: *Inst,
 ) InnerError!*Inst {
-    var opt_block = scope.cast(Scope.Block);
-    while (opt_block) |block| {
-        if (block.label) |*label| {
+    var opt_block: ?*Scope.Block = block;
+    while (opt_block) |cur_block| {
+        if (cur_block.label) |*label| {
             if (label.zir_block == zir_block) {
-                const b = try mod.requireFunctionBlock(scope, src);
+                try sema.requireFunctionBlock(block, src);
                 // Here we add a br instruction, but we over-allocate a little bit
                 // (if necessary) to make it possible to convert the instruction into
                 // a br_block_flat instruction later.
@@ -899,102 +979,134 @@ fn analyzeBreak(
     } else unreachable;
 }
 
-fn zirDbgStmt(mod: *Module, scope: *Scope, inst: *zir.Inst.NoOp) InnerError!*Inst {
+fn zirDbgStmtNode(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    if (scope.cast(Scope.Block)) |b| {
-        if (!b.is_comptime) {
-            return mod.addNoOp(b, inst.base.src, Type.initTag(.void), .dbg_stmt);
-        }
+
+    if (block.is_comptime) {
+        return sema.mod.constVoid(block.arena, .unneeded);
     }
-    return mod.constVoid(scope, inst.base.src);
+
+    const src_node = sema.code.instructions.items(.data)[inst].node;
+    const src: LazySrcLoc = .{ .node_offset = src_node };
+    return block.addNoOp(src, Type.initTag(.void), .dbg_stmt);
 }
 
-fn zirDeclRefStr(mod: *Module, scope: *Scope, inst: *zir.Inst.DeclRefStr) InnerError!*Inst {
+fn zirDeclRef(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const decl_name = try resolveConstString(mod, scope, inst.positionals.name);
-    return mod.analyzeDeclRefByName(scope, inst.base.src, decl_name);
+
+    const decl = sema.code.instructions.items(.data)[inst].decl;
+    return sema.analyzeDeclRef(block, .unneeded, decl);
 }
 
-fn zirDeclRef(mod: *Module, scope: *Scope, inst: *zir.Inst.DeclRef) InnerError!*Inst {
+fn zirDeclVal(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    return mod.analyzeDeclRef(scope, inst.base.src, inst.positionals.decl);
+
+    const decl = sema.code.instructions.items(.data)[inst].decl;
+    return sema.analyzeDeclVal(block, .unneeded, decl);
 }
 
-fn zirDeclVal(mod: *Module, scope: *Scope, inst: *zir.Inst.DeclVal) InnerError!*Inst {
+fn zirCallNone(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    return mod.analyzeDeclVal(scope, inst.base.src, inst.positionals.decl);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const func_src: LazySrcLoc = .{ .node_offset_call_func = inst_data.src_node };
+
+    return sema.analyzeCall(block, inst_data.operand, func_src, inst_data.src(), .auto, &.{});
 }
 
-fn zirCall(mod: *Module, scope: *Scope, inst: *zir.Inst.Call) InnerError!*Inst {
+fn zirCall(
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
+    modifier: std.builtin.CallOptions.Modifier,
+) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const func = try resolveInst(mod, scope, inst.positionals.func);
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const func_src: LazySrcLoc = .{ .node_offset_call_func = inst_data.src_node };
+    const call_src = inst_data.src();
+    const extra = sema.code.extraData(zir.Inst.Call, inst_data.payload_index);
+    const args = sema.code.extra[extra.end..][0..extra.data.args_len];
+
+    return sema.analyzeCall(block, extra.data.callee, func_src, call_src, modifier, args);
+}
+
+fn analyzeCall(
+    sema: *Sema,
+    block: *Scope.Block,
+    zir_func: zir.Inst.Ref,
+    func_src: LazySrcLoc,
+    call_src: LazySrcLoc,
+    modifier: std.builtin.CallOptions.Modifier,
+    zir_args: []const zir.Inst.Ref,
+) InnerError!*ir.Inst {
+    const func = sema.resolveInst(zir_func);
+
     if (func.ty.zigTypeTag() != .Fn)
-        return mod.fail(scope, inst.positionals.func.src, "type '{}' not a function", .{func.ty});
+        return sema.mod.fail(&block.base, func_src, "type '{}' not a function", .{func.ty});
 
     const cc = func.ty.fnCallingConvention();
     if (cc == .Naked) {
         // TODO add error note: declared here
-        return mod.fail(
-            scope,
-            inst.positionals.func.src,
+        return sema.mod.fail(
+            &block.base,
+            func_src,
             "unable to call function with naked calling convention",
             .{},
         );
     }
-    const call_params_len = inst.positionals.args.len;
     const fn_params_len = func.ty.fnParamLen();
     if (func.ty.fnIsVarArgs()) {
         assert(cc == .C);
-        if (call_params_len < fn_params_len) {
+        if (zir_args.len < fn_params_len) {
             // TODO add error note: declared here
-            return mod.fail(
-                scope,
-                inst.positionals.func.src,
+            return sema.mod.fail(
+                &block.base,
+                func_src,
                 "expected at least {d} argument(s), found {d}",
-                .{ fn_params_len, call_params_len },
+                .{ fn_params_len, zir_args.len },
             );
         }
-    } else if (fn_params_len != call_params_len) {
+    } else if (fn_params_len != zir_args.len) {
         // TODO add error note: declared here
-        return mod.fail(
-            scope,
-            inst.positionals.func.src,
+        return sema.mod.fail(
+            &block.base,
+            func_src,
             "expected {d} argument(s), found {d}",
-            .{ fn_params_len, call_params_len },
+            .{ fn_params_len, zir_args.len },
         );
     }
 
-    if (inst.positionals.modifier == .compile_time) {
-        return mod.fail(scope, inst.base.src, "TODO implement comptime function calls", .{});
+    if (modifier == .compile_time) {
+        return sema.mod.fail(&block.base, call_src, "TODO implement comptime function calls", .{});
     }
-    if (inst.positionals.modifier != .auto) {
-        return mod.fail(scope, inst.base.src, "TODO implement call with modifier {}", .{inst.positionals.modifier});
+    if (modifier != .auto) {
+        return sema.mod.fail(&block.base, call_src, "TODO implement call with modifier {}", .{modifier});
     }
 
     // TODO handle function calls of generic functions
-    const casted_args = try scope.arena().alloc(*Inst, call_params_len);
-    for (inst.positionals.args) |src_arg, i| {
+    const casted_args = try block.arena.alloc(*Inst, zir_args.len);
+    for (zir_args) |zir_arg, i| {
         // the args are already casted to the result of a param type instruction.
-        casted_args[i] = try resolveInst(mod, scope, src_arg);
+        casted_args[i] = sema.resolveInst(zir_arg);
     }
 
     const ret_type = func.ty.fnReturnType();
 
-    const b = try mod.requireFunctionBlock(scope, inst.base.src);
-    const is_comptime_call = b.is_comptime or inst.positionals.modifier == .compile_time;
-    const is_inline_call = is_comptime_call or inst.positionals.modifier == .always_inline or
+    try sema.requireFunctionBlock(block, call_src);
+    const is_comptime_call = block.is_comptime or modifier == .compile_time;
+    const is_inline_call = is_comptime_call or modifier == .always_inline or
         func.ty.fnCallingConvention() == .Inline;
     if (is_inline_call) {
-        const func_val = try mod.resolveConstValue(scope, func);
+        const func_val = try sema.resolveConstValue(block, func_src, func);
         const module_fn = switch (func_val.tag()) {
             .function => func_val.castTag(.function).?.data,
-            .extern_fn => return mod.fail(scope, inst.base.src, "{s} call of extern function", .{
+            .extern_fn => return sema.mod.fail(&block.base, call_src, "{s} call of extern function", .{
                 @as([]const u8, if (is_comptime_call) "comptime" else "inline"),
             }),
             else => unreachable,
@@ -1005,24 +1117,24 @@ fn zirCall(mod: *Module, scope: *Scope, inst: *zir.Inst.Call) InnerError!*Inst {
         // set to in the `Scope.Block`.
         // This block instruction will be used to capture the return value from the
         // inlined function.
-        const block_inst = try scope.arena().create(Inst.Block);
+        const block_inst = try block.arena.create(Inst.Block);
         block_inst.* = .{
             .base = .{
                 .tag = Inst.Block.base_tag,
                 .ty = ret_type,
-                .src = inst.base.src,
+                .src = call_src,
             },
             .body = undefined,
         };
         // If this is the top of the inline/comptime call stack, we use this data.
         // Otherwise we pass on the shared data from the parent scope.
-        var shared_inlining = Scope.Block.Inlining.Shared{
+        var shared_inlining: Scope.Block.Inlining.Shared = .{
             .branch_count = 0,
-            .caller = b.func,
+            .caller = block.func,
         };
         // This one is shared among sub-blocks within the same callee, but not
         // shared among the entire inline/comptime call stack.
-        var inlining = Scope.Block.Inlining{
+        var inlining: Scope.Block.Inlining = .{
-            .shared = if (b.inlining) |inlining| inlining.shared else &shared_inlining,
+            .shared = if (block.inlining) |inlining| inlining.shared else &shared_inlining,
             .param_index = 0,
             .casted_args = casted_args,
@@ -1042,7 +1154,7 @@ fn zirCall(mod: *Module, scope: *Scope, inst: *zir.Inst.Call) InnerError!*Inst {
-            .owner_decl = scope.ownerDecl().?,
+            .owner_decl = block.base.ownerDecl().?,
             .src_decl = module_fn.owner_decl,
             .instructions = .{},
-            .arena = scope.arena(),
+            .arena = block.arena,
             .label = null,
             .inlining = &inlining,
             .is_comptime = is_comptime_call,
@@ -1055,121 +1167,101 @@ fn zirCall(mod: *Module, scope: *Scope, inst: *zir.Inst.Call) InnerError!*Inst {
-        defer merges.results.deinit(mod.gpa);
-        defer merges.br_list.deinit(mod.gpa);
+        defer merges.results.deinit(sema.gpa);
+        defer merges.br_list.deinit(sema.gpa);
 
-        try mod.emitBackwardBranch(&child_block, inst.base.src);
+        try sema.mod.emitBackwardBranch(&child_block, call_src);
 
         // This will have return instructions analyzed as break instructions to
         // the block_inst above.
-        try analyzeBody(mod, &child_block, module_fn.zir);
+        try sema.analyzeBody(&child_block, module_fn.zir);
 
-        return analyzeBlockBody(mod, scope, &child_block, merges);
+        return sema.analyzeBlockBody(block, &child_block, merges);
     }
 
-    return mod.addCall(b, inst.base.src, ret_type, func, casted_args);
+    return block.addCall(call_src, ret_type, func, casted_args);
 }
 
-fn zirFn(mod: *Module, scope: *Scope, fn_inst: *zir.Inst.Fn) InnerError!*Inst {
+fn zirIntType(sema: *Sema, block: *Scope.Block, inttype: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const fn_type = try resolveType(mod, scope, fn_inst.positionals.fn_type);
-    const new_func = try scope.arena().create(Module.Fn);
-    new_func.* = .{
-        .state = if (fn_type.fnCallingConvention() == .Inline) .inline_only else .queued,
-        .zir = fn_inst.positionals.body,
-        .body = undefined,
-        .owner_decl = scope.ownerDecl().?,
-    };
-    return mod.constInst(scope, fn_inst.base.src, .{
-        .ty = fn_type,
-        .val = try Value.Tag.function.create(scope.arena(), new_func),
-    });
-}
-
-fn zirAwait(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
-    return mod.fail(scope, inst.base.src, "TODO implement await", .{});
-}
-
-fn zirResume(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
-    return mod.fail(scope, inst.base.src, "TODO implement resume", .{});
-}
-
-fn zirSuspend(mod: *Module, scope: *Scope, inst: *zir.Inst.NoOp) InnerError!*Inst {
-    return mod.fail(scope, inst.base.src, "TODO implement suspend", .{});
-}
-
-fn zirSuspendBlock(mod: *Module, scope: *Scope, inst: *zir.Inst.Block) InnerError!*Inst {
-    return mod.fail(scope, inst.base.src, "TODO implement suspend", .{});
+    return sema.mod.fail(&block.base, sema.src, "TODO implement inttype", .{});
 }
 
-fn zirIntType(mod: *Module, scope: *Scope, inttype: *zir.Inst.IntType) InnerError!*Inst {
+fn zirOptionalType(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    return mod.fail(scope, inttype.base.src, "TODO implement inttype", .{});
-}
 
-fn zirOptionalType(mod: *Module, scope: *Scope, optional: *zir.Inst.UnOp) InnerError!*Inst {
-    const tracy = trace(@src());
-    defer tracy.end();
-    const child_type = try resolveType(mod, scope, optional.positionals.operand);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const child_type = try sema.resolveType(block, inst_data.src(), inst_data.operand);
+    const opt_type = try sema.mod.optionalType(block.arena, child_type);
 
-    return mod.constType(scope, optional.base.src, try mod.optionalType(scope, child_type));
+    return sema.mod.constType(block.arena, inst_data.src(), opt_type);
 }
 
-fn zirOptionalTypeFromPtrElem(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirOptionalTypeFromPtrElem(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const ptr = try resolveInst(mod, scope, inst.positionals.operand);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const ptr = sema.resolveInst(inst_data.operand);
     const elem_ty = ptr.ty.elemType();
+    const opt_ty = try sema.mod.optionalType(block.arena, elem_ty);
 
-    return mod.constType(scope, inst.base.src, try mod.optionalType(scope, elem_ty));
+    return sema.mod.constType(block.arena, inst_data.src(), opt_ty);
 }
 
-fn zirArrayType(mod: *Module, scope: *Scope, array: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirArrayType(sema: *Sema, block: *Scope.Block, array: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
     // TODO these should be lazily evaluated
-    const len = try resolveInstConst(mod, scope, array.positionals.lhs);
-    const elem_type = try resolveType(mod, scope, array.positionals.rhs);
+    const bin_inst = sema.code.instructions.items(.data)[array].bin;
+    const len = try sema.resolveInstConst(block, sema.src, bin_inst.lhs);
+    const elem_type = try sema.resolveType(block, sema.src, bin_inst.rhs);
+    const array_ty = try sema.mod.arrayType(block.arena, len.val.toUnsignedInt(), null, elem_type);
 
-    return mod.constType(scope, array.base.src, try mod.arrayType(scope, len.val.toUnsignedInt(), null, elem_type));
+    return sema.mod.constType(block.arena, sema.src, array_ty);
 }
 
-fn zirArrayTypeSentinel(mod: *Module, scope: *Scope, array: *zir.Inst.ArrayTypeSentinel) InnerError!*Inst {
+fn zirArrayTypeSentinel(sema: *Sema, block: *Scope.Block, array: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
     // TODO these should be lazily evaluated
-    const len = try resolveInstConst(mod, scope, array.positionals.len);
-    const sentinel = try resolveInstConst(mod, scope, array.positionals.sentinel);
-    const elem_type = try resolveType(mod, scope, array.positionals.elem_type);
+    const inst_data = sema.code.instructions.items(.data)[array].array_type_sentinel;
+    const len = try sema.resolveInstConst(block, sema.src, inst_data.len);
+    const extra = sema.code.extraData(zir.Inst.ArrayTypeSentinel, inst_data.payload_index);
+    const sentinel = try sema.resolveInstConst(block, sema.src, extra.data.sentinel);
+    const elem_type = try sema.resolveType(block, sema.src, extra.data.elem_type);
+    const array_ty = try sema.mod.arrayType(block.arena, len.val.toUnsignedInt(), sentinel.val, elem_type);
 
-    return mod.constType(scope, array.base.src, try mod.arrayType(scope, len.val.toUnsignedInt(), sentinel.val, elem_type));
+    return sema.mod.constType(block.arena, sema.src, array_ty);
 }
 
-fn zirErrorUnionType(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirErrorUnionType(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const error_union = try resolveType(mod, scope, inst.positionals.lhs);
-    const payload = try resolveType(mod, scope, inst.positionals.rhs);
+
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const error_union = try sema.resolveType(block, sema.src, bin_inst.lhs);
+    const payload = try sema.resolveType(block, sema.src, bin_inst.rhs);
 
     if (error_union.zigTypeTag() != .ErrorSet) {
-        return mod.fail(scope, inst.base.src, "expected error set type, found {}", .{error_union.elemType()});
+        return sema.mod.fail(&block.base, sema.src, "expected error set type, found {}", .{error_union.elemType()});
     }
 
-    return mod.constType(scope, inst.base.src, try mod.errorUnionType(scope, error_union, payload));
+    return sema.mod.constType(block.arena, sema.src, try sema.mod.errorUnionType(block.arena, error_union, payload));
 }
 
-fn zirAnyframeType(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirAnyframeType(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const return_type = try resolveType(mod, scope, inst.positionals.operand);
 
-    return mod.constType(scope, inst.base.src, try mod.anyframeType(scope, return_type));
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const src = inst_data.src();
+    const operand_src: LazySrcLoc = .{ .node_offset_anyframe_type = inst_data.src_node };
+    const return_type = try sema.resolveType(block, operand_src, inst_data.operand);
+    const anyframe_type = try sema.mod.anyframeType(block.arena, return_type);
+
+    return sema.mod.constType(block.arena, src, anyframe_type);
 }
 
-fn zirErrorSet(mod: *Module, scope: *Scope, inst: *zir.Inst.ErrorSet) InnerError!*Inst {
+fn zirErrorSet(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    // The declarations arena will store the hashmap.
+
+    // The owner Decl arena will store the hashmap.
-    var new_decl_arena = std.heap.ArenaAllocator.init(mod.gpa);
+    var new_decl_arena = std.heap.ArenaAllocator.init(sema.gpa);
     errdefer new_decl_arena.deinit();
 
@@ -1186,7 +1278,7 @@ fn zirErrorSet(mod: *Module, scope: *Scope, inst: *zir.Inst.ErrorSet) InnerError
     for (inst.positionals.fields) |field_name| {
-        const entry = try mod.getErrorValue(field_name);
+        const entry = try sema.mod.getErrorValue(field_name);
         if (payload.data.fields.fetchPutAssumeCapacity(entry.key, {})) |_| {
-            return mod.fail(scope, inst.base.src, "duplicate error: '{s}'", .{field_name});
+            return sema.mod.fail(&block.base, inst.base.src, "duplicate error: '{s}'", .{field_name});
         }
     }
     // TODO create name in format "error:line:column"
@@ -1198,35 +1290,36 @@ fn zirErrorSet(mod: *Module, scope: *Scope, inst: *zir.Inst.ErrorSet) InnerError
-    return mod.analyzeDeclVal(scope, inst.base.src, new_decl);
+    return sema.analyzeDeclVal(block, .unneeded, new_decl);
 }
 
-fn zirErrorValue(mod: *Module, scope: *Scope, inst: *zir.Inst.ErrorValue) InnerError!*Inst {
+fn zirErrorValue(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
     // Create an anonymous error set type with only this error value, and return the value.
     const entry = try mod.getErrorValue(inst.positionals.name);
-    const result_type = try Type.Tag.error_set_single.create(scope.arena(), entry.key);
-    return mod.constInst(scope, inst.base.src, .{
+    const result_type = try Type.Tag.error_set_single.create(block.arena, entry.key);
+    return sema.mod.constInst(block.arena, sema.src, .{
         .ty = result_type,
-        .val = try Value.Tag.@"error".create(scope.arena(), .{
+        .val = try Value.Tag.@"error".create(block.arena, .{
             .name = entry.key,
         }),
     });
 }
 
-fn zirMergeErrorSets(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirMergeErrorSets(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const rhs_ty = try resolveType(mod, scope, inst.positionals.rhs);
-    const lhs_ty = try resolveType(mod, scope, inst.positionals.lhs);
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const lhs_ty = try sema.resolveType(block, bin_inst.lhs);
+    const rhs_ty = try sema.resolveType(block, bin_inst.rhs);
     if (rhs_ty.zigTypeTag() != .ErrorSet)
-        return mod.fail(scope, inst.positionals.rhs.src, "expected error set type, found {}", .{rhs_ty});
+        return sema.mod.fail(&block.base, sema.src, "expected error set type, found {}", .{rhs_ty});
     if (lhs_ty.zigTypeTag() != .ErrorSet)
-        return mod.fail(scope, inst.positionals.lhs.src, "expected error set type, found {}", .{lhs_ty});
+        return sema.mod.fail(&block.base, sema.src, "expected error set type, found {}", .{lhs_ty});
 
     // anything merged with anyerror is anyerror
     if (lhs_ty.tag() == .anyerror or rhs_ty.tag() == .anyerror)
-        return mod.constInst(scope, inst.base.src, .{
+        return sema.mod.constInst(block.arena, sema.src, .{
             .ty = Type.initTag(.type),
             .val = Value.initTag(.anyerror_type),
         });
@@ -1291,218 +1384,243 @@ fn zirMergeErrorSets(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerEr
-    return mod.analyzeDeclVal(scope, inst.base.src, new_decl);
+    return sema.analyzeDeclVal(block, sema.src, new_decl);
 }
 
-fn zirEnumLiteral(mod: *Module, scope: *Scope, inst: *zir.Inst.EnumLiteral) InnerError!*Inst {
+fn zirEnumLiteral(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const duped_name = try scope.arena().dupe(u8, inst.positionals.name);
-    return mod.constInst(scope, inst.base.src, .{
+
+    const duped_name = try block.arena.dupe(u8, inst.positionals.name);
+    return sema.mod.constInst(block.arena, sema.src, .{
         .ty = Type.initTag(.enum_literal),
-        .val = try Value.Tag.enum_literal.create(scope.arena(), duped_name),
+        .val = try Value.Tag.enum_literal.create(block.arena, duped_name),
     });
 }
 
 /// Pointer in, pointer out.
 fn zirOptionalPayloadPtr(
-    mod: *Module,
-    scope: *Scope,
-    unwrap: *zir.Inst.UnOp,
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
     safety_check: bool,
 ) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const optional_ptr = try resolveInst(mod, scope, unwrap.positionals.operand);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const optional_ptr = sema.resolveInst(block, inst_data.operand);
     assert(optional_ptr.ty.zigTypeTag() == .Pointer);
+    const src = inst_data.src();
 
     const opt_type = optional_ptr.ty.elemType();
     if (opt_type.zigTypeTag() != .Optional) {
-        return mod.fail(scope, unwrap.base.src, "expected optional type, found {}", .{opt_type});
+        return sema.mod.fail(&block.base, src, "expected optional type, found {}", .{opt_type});
     }
 
-    const child_type = try opt_type.optionalChildAlloc(scope.arena());
-    const child_pointer = try mod.simplePtrType(scope, unwrap.base.src, child_type, !optional_ptr.ty.isConstPtr(), .One);
+    const child_type = try opt_type.optionalChildAlloc(block.arena);
+    const child_pointer = try sema.mod.simplePtrType(block.arena, child_type, !optional_ptr.ty.isConstPtr(), .One);
 
     if (optional_ptr.value()) |pointer_val| {
-        const val = try pointer_val.pointerDeref(scope.arena());
+        const val = try pointer_val.pointerDeref(block.arena);
         if (val.isNull()) {
-            return mod.fail(scope, unwrap.base.src, "unable to unwrap null", .{});
+            return sema.mod.fail(&block.base, src, "unable to unwrap null", .{});
         }
         // The same Value represents the pointer to the optional and the payload.
-        return mod.constInst(scope, unwrap.base.src, .{
+        return sema.mod.constInst(block.arena, src, .{
             .ty = child_pointer,
             .val = pointer_val,
         });
     }
 
-    const b = try mod.requireRuntimeBlock(scope, unwrap.base.src);
-    if (safety_check and mod.wantSafety(scope)) {
-        const is_non_null = try mod.addUnOp(b, unwrap.base.src, Type.initTag(.bool), .is_non_null_ptr, optional_ptr);
+    try sema.requireRuntimeBlock(block, src);
+    if (safety_check and block.wantSafety()) {
+        const is_non_null = try block.addUnOp(src, Type.initTag(.bool), .is_non_null_ptr, optional_ptr);
-        try mod.addSafetyCheck(b, is_non_null, .unwrap_null);
+        try sema.addSafetyCheck(block, is_non_null, .unwrap_null);
     }
-    return mod.addUnOp(b, unwrap.base.src, child_pointer, .optional_payload_ptr, optional_ptr);
+    return block.addUnOp(src, child_pointer, .optional_payload_ptr, optional_ptr);
 }
 
 /// Value in, value out.
 fn zirOptionalPayload(
-    mod: *Module,
-    scope: *Scope,
-    unwrap: *zir.Inst.UnOp,
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
     safety_check: bool,
 ) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const operand = try resolveInst(mod, scope, unwrap.positionals.operand);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const src = inst_data.src();
+    const operand = sema.resolveInst(block, inst_data.operand);
     const opt_type = operand.ty;
     if (opt_type.zigTypeTag() != .Optional) {
-        return mod.fail(scope, unwrap.base.src, "expected optional type, found {}", .{opt_type});
+        return sema.mod.fail(&block.base, src, "expected optional type, found {}", .{opt_type});
     }
 
-    const child_type = try opt_type.optionalChildAlloc(scope.arena());
+    const child_type = try opt_type.optionalChildAlloc(block.arena);
 
     if (operand.value()) |val| {
         if (val.isNull()) {
-            return mod.fail(scope, unwrap.base.src, "unable to unwrap null", .{});
+            return sema.mod.fail(&block.base, src, "unable to unwrap null", .{});
         }
-        return mod.constInst(scope, unwrap.base.src, .{
+        return sema.mod.constInst(block.arena, src, .{
             .ty = child_type,
             .val = val,
         });
     }
 
-    const b = try mod.requireRuntimeBlock(scope, unwrap.base.src);
-    if (safety_check and mod.wantSafety(scope)) {
-        const is_non_null = try mod.addUnOp(b, unwrap.base.src, Type.initTag(.bool), .is_non_null, operand);
+    try sema.requireRuntimeBlock(block, src);
+    if (safety_check and block.wantSafety()) {
+        const is_non_null = try block.addUnOp(src, Type.initTag(.bool), .is_non_null, operand);
-        try mod.addSafetyCheck(b, is_non_null, .unwrap_null);
+        try sema.addSafetyCheck(block, is_non_null, .unwrap_null);
     }
-    return mod.addUnOp(b, unwrap.base.src, child_type, .optional_payload, operand);
+    return block.addUnOp(src, child_type, .optional_payload, operand);
 }
 
 /// Value in, value out
-fn zirErrUnionPayload(mod: *Module, scope: *Scope, unwrap: *zir.Inst.UnOp, safety_check: bool) InnerError!*Inst {
+fn zirErrUnionPayload(
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
+    safety_check: bool,
+) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const operand = try resolveInst(mod, scope, unwrap.positionals.operand);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const src = inst_data.src();
+    const operand = sema.resolveInst(block, inst_data.operand);
     if (operand.ty.zigTypeTag() != .ErrorUnion)
-        return mod.fail(scope, operand.src, "expected error union type, found '{}'", .{operand.ty});
+        return sema.mod.fail(&block.base, operand.src, "expected error union type, found '{}'", .{operand.ty});
 
     if (operand.value()) |val| {
         if (val.getError()) |name| {
-            return mod.fail(scope, unwrap.base.src, "caught unexpected error '{s}'", .{name});
+            return sema.mod.fail(&block.base, src, "caught unexpected error '{s}'", .{name});
         }
         const data = val.castTag(.error_union).?.data;
-        return mod.constInst(scope, unwrap.base.src, .{
+        return sema.mod.constInst(block.arena, src, .{
             .ty = operand.ty.castTag(.error_union).?.data.payload,
             .val = data,
         });
     }
-    const b = try mod.requireRuntimeBlock(scope, unwrap.base.src);
-    if (safety_check and mod.wantSafety(scope)) {
-        const is_non_err = try mod.addUnOp(b, unwrap.base.src, Type.initTag(.bool), .is_err, operand);
+    try sema.requireRuntimeBlock(block, src);
+    if (safety_check and block.wantSafety()) {
+        const is_non_err = try block.addUnOp(src, Type.initTag(.bool), .is_err, operand);
-        try mod.addSafetyCheck(b, is_non_err, .unwrap_errunion);
+        try sema.addSafetyCheck(block, is_non_err, .unwrap_errunion);
     }
-    return mod.addUnOp(b, unwrap.base.src, operand.ty.castTag(.error_union).?.data.payload, .unwrap_errunion_payload, operand);
+    return block.addUnOp(src, operand.ty.castTag(.error_union).?.data.payload, .unwrap_errunion_payload, operand);
 }
 
-/// Pointer in, pointer out
-fn zirErrUnionPayloadPtr(mod: *Module, scope: *Scope, unwrap: *zir.Inst.UnOp, safety_check: bool) InnerError!*Inst {
+/// Pointer in, pointer out.
+fn zirErrUnionPayloadPtr(
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
+    safety_check: bool,
+) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const operand = try resolveInst(mod, scope, unwrap.positionals.operand);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const src = inst_data.src();
+    const operand = sema.resolveInst(block, inst_data.operand);
     assert(operand.ty.zigTypeTag() == .Pointer);
 
     if (operand.ty.elemType().zigTypeTag() != .ErrorUnion)
-        return mod.fail(scope, unwrap.base.src, "expected error union type, found {}", .{operand.ty.elemType()});
+        return sema.mod.fail(&block.base, src, "expected error union type, found {}", .{operand.ty.elemType()});
 
-    const operand_pointer_ty = try mod.simplePtrType(scope, unwrap.base.src, operand.ty.elemType().castTag(.error_union).?.data.payload, !operand.ty.isConstPtr(), .One);
+    const operand_pointer_ty = try sema.mod.simplePtrType(block.arena, operand.ty.elemType().castTag(.error_union).?.data.payload, !operand.ty.isConstPtr(), .One);
 
     if (operand.value()) |pointer_val| {
-        const val = try pointer_val.pointerDeref(scope.arena());
+        const val = try pointer_val.pointerDeref(block.arena);
         if (val.getError()) |name| {
-            return mod.fail(scope, unwrap.base.src, "caught unexpected error '{s}'", .{name});
+            return sema.mod.fail(&block.base, src, "caught unexpected error '{s}'", .{name});
         }
         const data = val.castTag(.error_union).?.data;
         // The same Value represents the pointer to the error union and the payload.
-        return mod.constInst(scope, unwrap.base.src, .{
+        return sema.mod.constInst(block.arena, src, .{
             .ty = operand_pointer_ty,
             .val = try Value.Tag.ref_val.create(
-                scope.arena(),
+                block.arena,
                 data,
             ),
         });
     }
 
-    const b = try mod.requireRuntimeBlock(scope, unwrap.base.src);
-    if (safety_check and mod.wantSafety(scope)) {
-        const is_non_err = try mod.addUnOp(b, unwrap.base.src, Type.initTag(.bool), .is_err, operand);
+    try sema.requireRuntimeBlock(block, src);
+    if (safety_check and block.wantSafety()) {
+        const is_non_err = try block.addUnOp(src, Type.initTag(.bool), .is_err, operand);
-        try mod.addSafetyCheck(b, is_non_err, .unwrap_errunion);
+        try sema.addSafetyCheck(block, is_non_err, .unwrap_errunion);
     }
-    return mod.addUnOp(b, unwrap.base.src, operand_pointer_ty, .unwrap_errunion_payload_ptr, operand);
+    return block.addUnOp(src, operand_pointer_ty, .unwrap_errunion_payload_ptr, operand);
 }
 
 /// Value in, value out
-fn zirErrUnionCode(mod: *Module, scope: *Scope, unwrap: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirErrUnionCode(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const operand = try resolveInst(mod, scope, unwrap.positionals.operand);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const src = inst_data.src();
+    const operand = sema.resolveInst(block, inst_data.operand);
     if (operand.ty.zigTypeTag() != .ErrorUnion)
-        return mod.fail(scope, unwrap.base.src, "expected error union type, found '{}'", .{operand.ty});
+        return sema.mod.fail(&block.base, src, "expected error union type, found '{}'", .{operand.ty});
 
     if (operand.value()) |val| {
         assert(val.getError() != null);
         const data = val.castTag(.error_union).?.data;
-        return mod.constInst(scope, unwrap.base.src, .{
+        return sema.mod.constInst(block.arena, src, .{
             .ty = operand.ty.castTag(.error_union).?.data.error_set,
             .val = data,
         });
     }
 
-    const b = try mod.requireRuntimeBlock(scope, unwrap.base.src);
-    return mod.addUnOp(b, unwrap.base.src, operand.ty.castTag(.error_union).?.data.payload, .unwrap_errunion_err, operand);
+    try sema.requireRuntimeBlock(block, src);
+    return block.addUnOp(src, operand.ty.castTag(.error_union).?.data.error_set, .unwrap_errunion_err, operand);
 }
 
 /// Pointer in, value out
-fn zirErrUnionCodePtr(mod: *Module, scope: *Scope, unwrap: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirErrUnionCodePtr(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const operand = try resolveInst(mod, scope, unwrap.positionals.operand);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const src = inst_data.src();
+    const operand = sema.resolveInst(block, inst_data.operand);
     assert(operand.ty.zigTypeTag() == .Pointer);
 
     if (operand.ty.elemType().zigTypeTag() != .ErrorUnion)
-        return mod.fail(scope, unwrap.base.src, "expected error union type, found {}", .{operand.ty.elemType()});
+        return sema.mod.fail(&block.base, src, "expected error union type, found {}", .{operand.ty.elemType()});
 
     if (operand.value()) |pointer_val| {
-        const val = try pointer_val.pointerDeref(scope.arena());
+        const val = try pointer_val.pointerDeref(block.arena);
         assert(val.getError() != null);
         const data = val.castTag(.error_union).?.data;
-        return mod.constInst(scope, unwrap.base.src, .{
+        return sema.mod.constInst(block.arena, src, .{
             .ty = operand.ty.elemType().castTag(.error_union).?.data.error_set,
             .val = data,
         });
     }
 
-    const b = try mod.requireRuntimeBlock(scope, unwrap.base.src);
-    return mod.addUnOp(b, unwrap.base.src, operand.ty.castTag(.error_union).?.data.payload, .unwrap_errunion_err_ptr, operand);
+    try sema.requireRuntimeBlock(block, src);
+    return block.addUnOp(src, operand.ty.elemType().castTag(.error_union).?.data.error_set, .unwrap_errunion_err_ptr, operand);
 }
 
-fn zirEnsureErrPayloadVoid(mod: *Module, scope: *Scope, unwrap: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirEnsureErrPayloadVoid(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const operand = try resolveInst(mod, scope, unwrap.positionals.operand);
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const src = inst_data.src();
+    const operand = sema.resolveInst(block, inst_data.operand);
     if (operand.ty.zigTypeTag() != .ErrorUnion)
-        return mod.fail(scope, unwrap.base.src, "expected error union type, found '{}'", .{operand.ty});
+        return sema.mod.fail(&block.base, src, "expected error union type, found '{}'", .{operand.ty});
     if (operand.ty.castTag(.error_union).?.data.payload.zigTypeTag() != .Void) {
-        return mod.fail(scope, unwrap.base.src, "expression value is ignored", .{});
+        return sema.mod.fail(&block.base, src, "expression value is ignored", .{});
     }
-    return mod.constVoid(scope, unwrap.base.src);
+    return sema.mod.constVoid(block.arena, .unneeded);
 }
 
-fn zirFnType(mod: *Module, scope: *Scope, fntype: *zir.Inst.FnType, var_args: bool) InnerError!*Inst {
+fn zirFnType(sema: *Sema, block: *Scope.Block, fntype: zir.Inst.Index, var_args: bool) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
@@ -1517,7 +1635,7 @@ fn zirFnType(mod: *Module, scope: *Scope, fntype: *zir.Inst.FnType, var_args: bo
     );
 }
 
-fn zirFnTypeCc(mod: *Module, scope: *Scope, fntype: *zir.Inst.FnTypeCc, var_args: bool) InnerError!*Inst {
+fn zirFnTypeCc(sema: *Sema, block: *Scope.Block, fntype: zir.Inst.Index, var_args: bool) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
@@ -1526,7 +1644,7 @@ fn zirFnTypeCc(mod: *Module, scope: *Scope, fntype: *zir.Inst.FnTypeCc, var_args
     // std.builtin, this needs to change
     const cc_str = cc_tv.val.castTag(.enum_literal).?.data;
     const cc = std.meta.stringToEnum(std.builtin.CallingConvention, cc_str) orelse
-        return mod.fail(scope, fntype.positionals.cc.src, "Unknown calling convention {s}", .{cc_str});
+        return sema.mod.fail(&block.base, sema.src, "unknown calling convention {s}", .{cc_str});
     return fnTypeCommon(
-        mod,
-        scope,
+        sema,
+        block,
@@ -1539,129 +1657,144 @@ fn zirFnTypeCc(mod: *Module, scope: *Scope, fntype: *zir.Inst.FnTypeCc, var_args
 }
 
 fn fnTypeCommon(
-    mod: *Module,
-    scope: *Scope,
-    zir_inst: *zir.Inst,
-    zir_param_types: []*zir.Inst,
-    zir_return_type: *zir.Inst,
+    sema: *Sema,
+    block: *Scope.Block,
+    zir_inst: zir.Inst.Index,
+    zir_param_types: []zir.Inst.Index,
+    zir_return_type: zir.Inst.Index,
     cc: std.builtin.CallingConvention,
     var_args: bool,
 ) InnerError!*Inst {
-    const return_type = try resolveType(mod, scope, zir_return_type);
+    const return_type = try sema.resolveType(block, zir_return_type);
 
     // Hot path for some common function types.
     if (zir_param_types.len == 0 and !var_args) {
         if (return_type.zigTypeTag() == .NoReturn and cc == .Unspecified) {
-            return mod.constType(scope, zir_inst.src, Type.initTag(.fn_noreturn_no_args));
+            return sema.mod.constType(block.arena, sema.src, Type.initTag(.fn_noreturn_no_args));
         }
 
         if (return_type.zigTypeTag() == .Void and cc == .Unspecified) {
-            return mod.constType(scope, zir_inst.src, Type.initTag(.fn_void_no_args));
+            return sema.mod.constType(block.arena, sema.src, Type.initTag(.fn_void_no_args));
         }
 
         if (return_type.zigTypeTag() == .NoReturn and cc == .Naked) {
-            return mod.constType(scope, zir_inst.src, Type.initTag(.fn_naked_noreturn_no_args));
+            return sema.mod.constType(block.arena, sema.src, Type.initTag(.fn_naked_noreturn_no_args));
         }
 
         if (return_type.zigTypeTag() == .Void and cc == .C) {
-            return mod.constType(scope, zir_inst.src, Type.initTag(.fn_ccc_void_no_args));
+            return sema.mod.constType(block.arena, sema.src, Type.initTag(.fn_ccc_void_no_args));
         }
     }
 
-    const arena = scope.arena();
-    const param_types = try arena.alloc(Type, zir_param_types.len);
+    const param_types = try block.arena.alloc(Type, zir_param_types.len);
     for (zir_param_types) |param_type, i| {
-        const resolved = try resolveType(mod, scope, param_type);
+        const resolved = try sema.resolveType(block, param_type);
         // TODO skip for comptime params
         if (!resolved.isValidVarType(false)) {
-            return mod.fail(scope, param_type.src, "parameter of type '{}' must be declared comptime", .{resolved});
+            return sema.mod.fail(&block.base, sema.src, "parameter of type '{}' must be declared comptime", .{resolved});
         }
         param_types[i] = resolved;
     }
 
-    const fn_ty = try Type.Tag.function.create(arena, .{
+    const fn_ty = try Type.Tag.function.create(block.arena, .{
         .param_types = param_types,
         .return_type = return_type,
         .cc = cc,
         .is_var_args = var_args,
     });
-    return mod.constType(scope, zir_inst.src, fn_ty);
+    return sema.mod.constType(block.arena, sema.src, fn_ty);
 }
 
-fn zirPrimitive(mod: *Module, scope: *Scope, primitive: *zir.Inst.Primitive) InnerError!*Inst {
+fn zirAs(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    return mod.constInst(scope, primitive.base.src, primitive.positionals.tag.toTypedValue());
-}
 
-fn zirAs(mod: *Module, scope: *Scope, as: *zir.Inst.BinOp) InnerError!*Inst {
-    const tracy = trace(@src());
-    defer tracy.end();
-    const dest_type = try resolveType(mod, scope, as.positionals.lhs);
-    const new_inst = try resolveInst(mod, scope, as.positionals.rhs);
-    return mod.coerce(scope, dest_type, new_inst);
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const dest_type = try sema.resolveType(block, bin_inst.lhs);
+    const tzir_inst = sema.resolveInst(block, bin_inst.rhs);
+    return sema.coerce(block, dest_type, tzir_inst);
 }
 
-fn zirPtrtoint(mod: *Module, scope: *Scope, ptrtoint: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirPtrtoint(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const ptr = try resolveInst(mod, scope, ptrtoint.positionals.operand);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const ptr = sema.resolveInst(block, inst_data.operand);
     if (ptr.ty.zigTypeTag() != .Pointer) {
-        return mod.fail(scope, ptrtoint.positionals.operand.src, "expected pointer, found '{}'", .{ptr.ty});
+        const ptr_src: LazySrcLoc = .{ .node_offset_builtin_call_arg0 = inst_data.src_node };
+        return sema.mod.fail(&block.base, ptr_src, "expected pointer, found '{}'", .{ptr.ty});
     }
     // TODO handle known-pointer-address
-    const b = try mod.requireRuntimeBlock(scope, ptrtoint.base.src);
+    const src = inst_data.src();
+    try sema.requireRuntimeBlock(block, src);
     const ty = Type.initTag(.usize);
-    return mod.addUnOp(b, ptrtoint.base.src, ty, .ptrtoint, ptr);
+    return block.addUnOp(src, ty, .ptrtoint, ptr);
 }
 
-fn zirFieldVal(mod: *Module, scope: *Scope, inst: *zir.Inst.Field) InnerError!*Inst {
+fn zirFieldVal(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const object = try resolveInst(mod, scope, inst.positionals.object);
-    const field_name = inst.positionals.field_name;
-    const object_ptr = try mod.analyzeRef(scope, inst.base.src, object);
-    const result_ptr = try mod.namedFieldPtr(scope, inst.base.src, object_ptr, field_name, inst.base.src);
-    return mod.analyzeDeref(scope, inst.base.src, result_ptr, result_ptr.src);
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const src = inst_data.src();
+    const field_name_src: LazySrcLoc = .{ .node_offset_field_name = inst_data.src_node };
+    const extra = sema.code.extraData(zir.Inst.Field, inst_data.payload_index).data;
+    const field_name = sema.code.string_bytes[extra.field_name_start..][0..extra.field_name_len];
+    const object = sema.resolveInst(block, extra.lhs);
+    const object_ptr = try sema.analyzeRef(block, src, object);
+    const result_ptr = try sema.namedFieldPtr(block, src, object_ptr, field_name, field_name_src);
+    return sema.analyzeDeref(block, src, result_ptr, result_ptr.src);
 }
 
-fn zirFieldPtr(mod: *Module, scope: *Scope, inst: *zir.Inst.Field) InnerError!*Inst {
+fn zirFieldPtr(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const object_ptr = try resolveInst(mod, scope, inst.positionals.object);
-    const field_name = inst.positionals.field_name;
-    return mod.namedFieldPtr(scope, inst.base.src, object_ptr, field_name, inst.base.src);
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const src = inst_data.src();
+    const field_name_src: LazySrcLoc = .{ .node_offset_field_name = inst_data.src_node };
+    const extra = sema.code.extraData(zir.Inst.Field, inst_data.payload_index).data;
+    const field_name = sema.code.string_bytes[extra.field_name_start..][0..extra.field_name_len];
+    const object_ptr = sema.resolveInst(block, extra.lhs);
+    return sema.namedFieldPtr(block, src, object_ptr, field_name, field_name_src);
 }
 
-fn zirFieldValNamed(mod: *Module, scope: *Scope, inst: *zir.Inst.FieldNamed) InnerError!*Inst {
+fn zirFieldValNamed(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const object = try resolveInst(mod, scope, inst.positionals.object);
-    const field_name = try resolveConstString(mod, scope, inst.positionals.field_name);
-    const fsrc = inst.positionals.field_name.src;
-    const object_ptr = try mod.analyzeRef(scope, inst.base.src, object);
-    const result_ptr = try mod.namedFieldPtr(scope, inst.base.src, object_ptr, field_name, fsrc);
-    return mod.analyzeDeref(scope, inst.base.src, result_ptr, result_ptr.src);
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const src = inst_data.src();
+    const field_name_src: LazySrcLoc = .{ .node_offset_builtin_call_arg1 = inst_data.src_node };
+    const extra = sema.code.extraData(zir.Inst.FieldNamed, inst_data.payload_index).data;
+    const object = sema.resolveInst(block, extra.lhs);
+    const field_name = try sema.resolveConstString(block, field_name_src, extra.field_name);
+    const object_ptr = try sema.analyzeRef(block, src, object);
+    const result_ptr = try sema.namedFieldPtr(block, src, object_ptr, field_name, field_name_src);
+    return sema.analyzeDeref(block, src, result_ptr, src);
 }
 
-fn zirFieldPtrNamed(mod: *Module, scope: *Scope, inst: *zir.Inst.FieldNamed) InnerError!*Inst {
+fn zirFieldPtrNamed(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const object_ptr = try resolveInst(mod, scope, inst.positionals.object);
-    const field_name = try resolveConstString(mod, scope, inst.positionals.field_name);
-    const fsrc = inst.positionals.field_name.src;
-    return mod.namedFieldPtr(scope, inst.base.src, object_ptr, field_name, fsrc);
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const src = inst_data.src();
+    const field_name_src: LazySrcLoc = .{ .node_offset_builtin_call_arg1 = inst_data.src_node };
+    const extra = sema.code.extraData(zir.Inst.FieldNamed, inst_data.payload_index).data;
+    const object_ptr = sema.resolveInst(block, extra.lhs);
+    const field_name = try sema.resolveConstString(block, field_name_src, extra.field_name);
+    return sema.namedFieldPtr(block, src, object_ptr, field_name, field_name_src);
 }
 
-fn zirIntcast(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirIntcast(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const dest_type = try resolveType(mod, scope, inst.positionals.lhs);
-    const operand = try resolveInst(mod, scope, inst.positionals.rhs);
+
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const dest_type = try sema.resolveType(block, bin_inst.lhs);
+    const operand = sema.resolveInst(block, bin_inst.rhs);
 
     const dest_is_comptime_int = switch (dest_type.zigTypeTag()) {
         .ComptimeInt => true,
@@ -1687,27 +1820,31 @@ fn zirIntcast(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*In
     }
 
     if (operand.value() != null) {
-        return mod.coerce(scope, dest_type, operand);
+        return sema.coerce(block, dest_type, operand);
     } else if (dest_is_comptime_int) {
-        return mod.fail(scope, inst.base.src, "unable to cast runtime value to 'comptime_int'", .{});
+        return sema.mod.fail(&block.base, sema.src, "unable to cast runtime value to 'comptime_int'", .{});
     }
 
-    return mod.fail(scope, inst.base.src, "TODO implement analyze widen or shorten int", .{});
+    return sema.mod.fail(&block.base, sema.src, "TODO implement analyze widen or shorten int", .{});
 }
 
-fn zirBitcast(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirBitcast(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const dest_type = try resolveType(mod, scope, inst.positionals.lhs);
-    const operand = try resolveInst(mod, scope, inst.positionals.rhs);
+
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const dest_type = try sema.resolveType(block, bin_inst.lhs);
+    const operand = sema.resolveInst(block, bin_inst.rhs);
-    return mod.bitcast(scope, dest_type, operand);
+    return sema.bitcast(block, dest_type, operand);
 }
 
-fn zirFloatcast(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirFloatcast(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const dest_type = try resolveType(mod, scope, inst.positionals.lhs);
-    const operand = try resolveInst(mod, scope, inst.positionals.rhs);
+
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const dest_type = try sema.resolveType(block, bin_inst.lhs);
+    const operand = sema.resolveInst(block, bin_inst.rhs);
 
     const dest_is_comptime_float = switch (dest_type.zigTypeTag()) {
         .ComptimeFloat => true,
@@ -1733,110 +1870,172 @@ fn zirFloatcast(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*
     }
 
     if (operand.value() != null) {
-        return mod.coerce(scope, dest_type, operand);
+        return sema.coerce(block, dest_type, operand);
     } else if (dest_is_comptime_float) {
-        return mod.fail(scope, inst.base.src, "unable to cast runtime value to 'comptime_float'", .{});
+        return sema.mod.fail(&block.base, sema.src, "unable to cast runtime value to 'comptime_float'", .{});
     }
 
-    return mod.fail(scope, inst.base.src, "TODO implement analyze widen or shorten float", .{});
+    return sema.mod.fail(&block.base, sema.src, "TODO implement analyze widen or shorten float", .{});
 }
 
-fn zirElemVal(mod: *Module, scope: *Scope, inst: *zir.Inst.Elem) InnerError!*Inst {
+fn zirElemVal(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const array = try resolveInst(mod, scope, inst.positionals.array);
-    const array_ptr = try mod.analyzeRef(scope, inst.base.src, array);
-    const elem_index = try resolveInst(mod, scope, inst.positionals.index);
-    const result_ptr = try mod.elemPtr(scope, inst.base.src, array_ptr, elem_index);
-    return mod.analyzeDeref(scope, inst.base.src, result_ptr, result_ptr.src);
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const array = sema.resolveInst(bin_inst.lhs);
+    const array_ptr = try sema.analyzeRef(block, sema.src, array);
+    const elem_index = sema.resolveInst(bin_inst.rhs);
+    const result_ptr = try sema.elemPtr(block, sema.src, array_ptr, elem_index, sema.src);
+    return sema.analyzeDeref(block, sema.src, result_ptr, sema.src);
 }
 
-fn zirElemPtr(mod: *Module, scope: *Scope, inst: *zir.Inst.Elem) InnerError!*Inst {
+fn zirElemValNode(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const array_ptr = try resolveInst(mod, scope, inst.positionals.array);
-    const elem_index = try resolveInst(mod, scope, inst.positionals.index);
-    return mod.elemPtr(scope, inst.base.src, array_ptr, elem_index);
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const src = inst_data.src();
+    const elem_index_src: LazySrcLoc = .{ .node_offset_array_access_index = inst_data.src_node };
+    const extra = sema.code.extraData(zir.Inst.Bin, inst_data.payload_index).data;
+    const array = sema.resolveInst(extra.lhs);
+    const array_ptr = try sema.analyzeRef(block, src, array);
+    const elem_index = sema.resolveInst(extra.rhs);
+    const result_ptr = try sema.elemPtr(block, src, array_ptr, elem_index, elem_index_src);
+    return sema.analyzeDeref(block, src, result_ptr, src);
 }
 
-fn zirSlice(mod: *Module, scope: *Scope, inst: *zir.Inst.Slice) InnerError!*Inst {
+fn zirElemPtr(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const array_ptr = try resolveInst(mod, scope, inst.positionals.array_ptr);
-    const start = try resolveInst(mod, scope, inst.positionals.start);
-    const end = if (inst.kw_args.end) |end| try resolveInst(mod, scope, end) else null;
-    const sentinel = if (inst.kw_args.sentinel) |sentinel| try resolveInst(mod, scope, sentinel) else null;
 
-    return mod.analyzeSlice(scope, inst.base.src, array_ptr, start, end, sentinel);
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const array_ptr = sema.resolveInst(bin_inst.lhs);
+    const elem_index = sema.resolveInst(bin_inst.rhs);
+    return sema.elemPtr(block, sema.src, array_ptr, elem_index, sema.src);
 }
 
-fn zirSliceStart(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirElemPtrNode(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const array_ptr = try resolveInst(mod, scope, inst.positionals.lhs);
-    const start = try resolveInst(mod, scope, inst.positionals.rhs);
 
-    return mod.analyzeSlice(scope, inst.base.src, array_ptr, start, null, null);
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const src = inst_data.src();
+    const elem_index_src: LazySrcLoc = .{ .node_offset_array_access_index = inst_data.src_node };
+    const extra = sema.code.extraData(zir.Inst.Bin, inst_data.payload_index).data;
+    const array_ptr = sema.resolveInst(extra.lhs);
+    const elem_index = sema.resolveInst(extra.rhs);
+    return sema.elemPtr(block, src, array_ptr, elem_index, elem_index_src);
 }
 
-fn zirSwitchRange(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirSliceStart(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const start = try resolveInst(mod, scope, inst.positionals.lhs);
-    const end = try resolveInst(mod, scope, inst.positionals.rhs);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const src = inst_data.src();
+    const extra = sema.code.extraData(zir.Inst.SliceStart, inst_data.payload_index).data;
+    const array_ptr = sema.resolveInst(extra.lhs);
+    const start = sema.resolveInst(extra.start);
+
+    return sema.analyzeSlice(block, src, array_ptr, start, null, null, .unneeded);
+}
+
+fn zirSliceEnd(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
+    const tracy = trace(@src());
+    defer tracy.end();
+
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const src = inst_data.src();
+    const extra = sema.code.extraData(zir.Inst.SliceEnd, inst_data.payload_index).data;
+    const array_ptr = sema.resolveInst(extra.lhs);
+    const start = sema.resolveInst(extra.start);
+    const end = sema.resolveInst(extra.end);
+
+    return sema.analyzeSlice(block, src, array_ptr, start, end, null, .unneeded);
+}
+
+fn zirSliceSentinel(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
+    const tracy = trace(@src());
+    defer tracy.end();
+
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const src = inst_data.src();
+    const sentinel_src: LazySrcLoc = .{ .node_offset_slice_sentinel = inst_data.src_node };
+    const extra = sema.code.extraData(zir.Inst.SliceSentinel, inst_data.payload_index).data;
+    const array_ptr = sema.resolveInst(extra.lhs);
+    const start = sema.resolveInst(extra.start);
+    const end = sema.resolveInst(extra.end);
+    const sentinel = sema.resolveInst(extra.sentinel);
+
+    return sema.analyzeSlice(block, src, array_ptr, start, end, sentinel, sentinel_src);
+}
+
+fn zirSwitchRange(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
+    const tracy = trace(@src());
+    defer tracy.end();
+
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const start = sema.resolveInst(bin_inst.lhs);
+    const end = sema.resolveInst(bin_inst.rhs);
 
     switch (start.ty.zigTypeTag()) {
         .Int, .ComptimeInt => {},
-        else => return mod.constVoid(scope, inst.base.src),
+        else => return sema.mod.constVoid(block.arena, .unneeded),
     }
     switch (end.ty.zigTypeTag()) {
         .Int, .ComptimeInt => {},
-        else => return mod.constVoid(scope, inst.base.src),
+        else => return sema.mod.constVoid(block.arena, .unneeded),
     }
     // .switch_range must be inside a comptime scope
     const start_val = start.value().?;
     const end_val = end.value().?;
     if (start_val.compare(.gte, end_val)) {
-        return mod.fail(scope, inst.base.src, "range start value must be smaller than the end value", .{});
+        return sema.mod.fail(&block.base, sema.src, "range start value must be smaller than the end value", .{});
     }
-    return mod.constVoid(scope, inst.base.src);
+    return sema.mod.constVoid(block.arena, .unneeded);
 }
 
-fn zirSwitchBr(mod: *Module, scope: *Scope, inst: *zir.Inst.SwitchBr, ref: bool) InnerError!*Inst {
+fn zirSwitchBr(
+    sema: *Sema,
+    parent_block: *Scope.Block,
+    inst: zir.Inst.Index,
+    ref: bool,
+) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const target_ptr = try resolveInst(mod, scope, inst.positionals.target);
+    if (true) @panic("TODO rework with zir-memory-layout in mind");
+
+    const target_ptr = sema.resolveInst(inst.positionals.target);
     const target = if (ref)
-        try mod.analyzeDeref(scope, inst.base.src, target_ptr, inst.positionals.target.src)
+        try sema.analyzeDeref(parent_block, inst.base.src, target_ptr, inst.positionals.target.src)
     else
         target_ptr;
-    try validateSwitch(mod, scope, target, inst);
+    try sema.validateSwitch(parent_block, target, inst);
 
-    if (try mod.resolveDefinedValue(scope, target)) |target_val| {
+    if (try sema.resolveDefinedValue(parent_block, target)) |target_val| {
         for (inst.positionals.cases) |case| {
-            const resolved = try resolveInst(mod, scope, case.item);
-            const casted = try mod.coerce(scope, target.ty, resolved);
-            const item = try mod.resolveConstValue(scope, casted);
+            const resolved = sema.resolveInst(case.item);
+            const casted = try sema.coerce(parent_block, target.ty, resolved);
+            const item = try sema.resolveConstValue(parent_block, .unneeded, casted); // TODO: proper case source location
 
             if (target_val.eql(item)) {
-                try analyzeBody(mod, scope.cast(Scope.Block).?, case.body);
+                try sema.body(parent_block, case.body);
-                return mod.constNoReturn(scope, inst.base.src);
+                return sema.mod.constNoReturn(parent_block.arena, .unneeded);
             }
         }
-        try analyzeBody(mod, scope.cast(Scope.Block).?, inst.positionals.else_body);
+        try sema.body(parent_block, inst.positionals.else_body);
-        return mod.constNoReturn(scope, inst.base.src);
+        return sema.mod.constNoReturn(parent_block.arena, .unneeded);
     }
 
     if (inst.positionals.cases.len == 0) {
         // no cases just analyze else_branch
-        try analyzeBody(mod, scope.cast(Scope.Block).?, inst.positionals.else_body);
+        try sema.body(parent_block, inst.positionals.else_body);
-        return mod.constNoReturn(scope, inst.base.src);
+        return sema.mod.constNoReturn(parent_block.arena, .unneeded);
     }
 
-    const parent_block = try mod.requireRuntimeBlock(scope, inst.base.src);
+    try sema.requireRuntimeBlock(parent_block, inst.base.src);
     const cases = try parent_block.arena.alloc(Inst.SwitchBr.Case, inst.positionals.cases.len);
 
     var case_block: Scope.Block = .{
@@ -1857,11 +2056,11 @@ fn zirSwitchBr(mod: *Module, scope: *Scope, inst: *zir.Inst.SwitchBr, ref: bool)
         // Reset without freeing.
         case_block.instructions.items.len = 0;
 
-        const resolved = try resolveInst(mod, scope, case.item);
-        const casted = try mod.coerce(scope, target.ty, resolved);
-        const item = try mod.resolveConstValue(scope, casted);
+        const resolved = sema.resolveInst(case.item);
+        const casted = try sema.coerce(parent_block, target.ty, resolved);
+        const item = try sema.resolveConstValue(parent_block, .unneeded, casted); // TODO: proper case source location
 
-        try analyzeBody(mod, &case_block, case.body);
+        try sema.body(&case_block, case.body);
 
         cases[i] = .{
             .item = item,
@@ -1870,7 +2069,7 @@ fn zirSwitchBr(mod: *Module, scope: *Scope, inst: *zir.Inst.SwitchBr, ref: bool)
     }
 
     case_block.instructions.items.len = 0;
-    try analyzeBody(mod, &case_block, inst.positionals.else_body);
+    try sema.body(&case_block, inst.positionals.else_body);
 
     const else_body: ir.Body = .{
         .instructions = try parent_block.arena.dupe(*Inst, case_block.instructions.items),
@@ -1879,10 +2078,10 @@ fn zirSwitchBr(mod: *Module, scope: *Scope, inst: *zir.Inst.SwitchBr, ref: bool)
-    return mod.addSwitchBr(parent_block, inst.base.src, target, cases, else_body);
+    return sema.mod.addSwitchBr(parent_block, inst.base.src, target, cases, else_body);
 }
 
-fn validateSwitch(mod: *Module, scope: *Scope, target: *Inst, inst: *zir.Inst.SwitchBr) InnerError!void {
+fn validateSwitch(sema: *Sema, block: *Scope.Block, target: *Inst, inst: zir.Inst.Index) InnerError!void {
     // validate usage of '_' prongs
     if (inst.positionals.special_prong == .underscore and target.ty.zigTypeTag() != .Enum) {
-        return mod.fail(scope, inst.base.src, "'_' prong only allowed when switching on non-exhaustive enums", .{});
+        return sema.mod.fail(&block.base, inst.base.src, "'_' prong only allowed when switching on non-exhaustive enums", .{});
         // TODO notes "'_' prong here" inst.positionals.cases[last].src
     }
 
@@ -1891,7 +2090,7 @@ fn validateSwitch(mod: *Module, scope: *Scope, target: *Inst, inst: *zir.Inst.Sw
         switch (target.ty.zigTypeTag()) {
             .Int, .ComptimeInt => {},
             else => {
-                return mod.fail(scope, target.src, "ranges not allowed when switching on type {}", .{target.ty});
+                return sema.mod.fail(&block.base, target.src, "ranges not allowed when switching on type {}", .{target.ty});
                 // TODO notes "range used here" range_inst.src
             },
         }
@@ -1899,34 +2098,34 @@ fn validateSwitch(mod: *Module, scope: *Scope, target: *Inst, inst: *zir.Inst.Sw
 
     // validate for duplicate items/missing else prong
     switch (target.ty.zigTypeTag()) {
-        .Enum => return mod.fail(scope, inst.base.src, "TODO validateSwitch .Enum", .{}),
-        .ErrorSet => return mod.fail(scope, inst.base.src, "TODO validateSwitch .ErrorSet", .{}),
-        .Union => return mod.fail(scope, inst.base.src, "TODO validateSwitch .Union", .{}),
+        .Enum => return sema.mod.fail(&block.base, inst.base.src, "TODO validateSwitch .Enum", .{}),
+        .ErrorSet => return sema.mod.fail(&block.base, inst.base.src, "TODO validateSwitch .ErrorSet", .{}),
+        .Union => return sema.mod.fail(&block.base, inst.base.src, "TODO validateSwitch .Union", .{}),
         .Int, .ComptimeInt => {
-            var range_set = @import("RangeSet.zig").init(mod.gpa);
+            var range_set = @import("RangeSet.zig").init(sema.mod.gpa);
             defer range_set.deinit();
 
             for (inst.positionals.items) |item| {
                 const maybe_src = if (item.castTag(.switch_range)) |range| blk: {
-                    const start_resolved = try resolveInst(mod, scope, range.positionals.lhs);
-                    const start_casted = try mod.coerce(scope, target.ty, start_resolved);
-                    const end_resolved = try resolveInst(mod, scope, range.positionals.rhs);
-                    const end_casted = try mod.coerce(scope, target.ty, end_resolved);
+                    const start_resolved = sema.resolveInst(range.positionals.lhs);
+                    const start_casted = try sema.coerce(block, target.ty, start_resolved);
+                    const end_resolved = sema.resolveInst(range.positionals.rhs);
+                    const end_casted = try sema.coerce(block, target.ty, end_resolved);
 
                     break :blk try range_set.add(
-                        try mod.resolveConstValue(scope, start_casted),
-                        try mod.resolveConstValue(scope, end_casted),
+                        try sema.resolveConstValue(block, .unneeded, start_casted), // TODO: range start source location
+                        try sema.resolveConstValue(block, .unneeded, end_casted), // TODO: range end source location
                         item.src,
                     );
                 } else blk: {
-                    const resolved = try resolveInst(mod, scope, item);
-                    const casted = try mod.coerce(scope, target.ty, resolved);
-                    const value = try mod.resolveConstValue(scope, casted);
+                    const resolved = sema.resolveInst(item);
+                    const casted = try sema.coerce(block, target.ty, resolved);
+                    const value = try sema.resolveConstValue(block, .unneeded, casted); // TODO: item source location
                     break :blk try range_set.add(value, value, item.src);
                 };
 
                 if (maybe_src) |previous_src| {
-                    return mod.fail(scope, item.src, "duplicate switch value", .{});
+                    return sema.mod.fail(&block.base, item.src, "duplicate switch value", .{});
                     // TODO notes "previous value is here" previous_src
                 }
             }
@@ -1939,54 +2138,54 @@ fn validateSwitch(mod: *Module, scope: *Scope, target: *Inst, inst: *zir.Inst.Sw
-                const end = try target.ty.maxInt(&arena, mod.getTarget());
+                const end = try target.ty.maxInt(&arena, sema.mod.getTarget());
                 if (try range_set.spans(start, end)) {
                     if (inst.positionals.special_prong == .@"else") {
-                        return mod.fail(scope, inst.base.src, "unreachable else prong, all cases already handled", .{});
+                        return sema.mod.fail(&block.base, inst.base.src, "unreachable else prong, all cases already handled", .{});
                     }
                     return;
                 }
             }
 
             if (inst.positionals.special_prong != .@"else") {
-                return mod.fail(scope, inst.base.src, "switch must handle all possibilities", .{});
+                return sema.mod.fail(&block.base, inst.base.src, "switch must handle all possibilities", .{});
             }
         },
         .Bool => {
             var true_count: u8 = 0;
             var false_count: u8 = 0;
             for (inst.positionals.items) |item| {
-                const resolved = try resolveInst(mod, scope, item);
-                const casted = try mod.coerce(scope, Type.initTag(.bool), resolved);
-                if ((try mod.resolveConstValue(scope, casted)).toBool()) {
+                const resolved = sema.resolveInst(item);
+                const casted = try sema.coerce(block, Type.initTag(.bool), resolved);
+                if ((try sema.resolveConstValue(block, .unneeded, casted)).toBool()) { // TODO: item source location
                     true_count += 1;
                 } else {
                     false_count += 1;
                 }
 
                 if (true_count + false_count > 2) {
-                    return mod.fail(scope, item.src, "duplicate switch value", .{});
+                    return sema.mod.fail(&block.base, item.src, "duplicate switch value", .{});
                 }
             }
             if ((true_count + false_count < 2) and inst.positionals.special_prong != .@"else") {
-                return mod.fail(scope, inst.base.src, "switch must handle all possibilities", .{});
+                return sema.mod.fail(&block.base, inst.base.src, "switch must handle all possibilities", .{});
             }
             if ((true_count + false_count == 2) and inst.positionals.special_prong == .@"else") {
-                return mod.fail(scope, inst.base.src, "unreachable else prong, all cases already handled", .{});
+                return sema.mod.fail(&block.base, inst.base.src, "unreachable else prong, all cases already handled", .{});
             }
         },
         .EnumLiteral, .Void, .Fn, .Pointer, .Type => {
             if (inst.positionals.special_prong != .@"else") {
-                return mod.fail(scope, inst.base.src, "else prong required when switching on type '{}'", .{target.ty});
+                return sema.mod.fail(&block.base, inst.base.src, "else prong required when switching on type '{}'", .{target.ty});
             }
 
-            var seen_values = std.HashMap(Value, usize, Value.hash, Value.eql, std.hash_map.DefaultMaxLoadPercentage).init(mod.gpa);
+            var seen_values = std.HashMap(Value, usize, Value.hash, Value.eql, std.hash_map.DefaultMaxLoadPercentage).init(sema.mod.gpa);
             defer seen_values.deinit();
 
             for (inst.positionals.items) |item| {
-                const resolved = try resolveInst(mod, scope, item);
-                const casted = try mod.coerce(scope, target.ty, resolved);
-                const val = try mod.resolveConstValue(scope, casted);
+                const resolved = sema.resolveInst(item);
+                const casted = try sema.coerce(block, target.ty, resolved);
+                const val = try sema.resolveConstValue(block, .unneeded, casted); // TODO: item source location
 
                 if (try seen_values.fetchPut(val, item.src)) |prev| {
-                    return mod.fail(scope, item.src, "duplicate switch value", .{});
+                    return sema.mod.fail(&block.base, item.src, "duplicate switch value", .{});
                     // TODO notes "previous value here" prev.value
                 }
             }
@@ -2007,54 +2206,59 @@ fn validateSwitch(mod: *Module, scope: *Scope, target: *Inst, inst: *zir.Inst.Sw
         .ComptimeFloat,
         .Float,
         => {
-            return mod.fail(scope, target.src, "invalid switch target type '{}'", .{target.ty});
+            return sema.mod.fail(&block.base, target.src, "invalid switch target type '{}'", .{target.ty});
         },
     }
 }
 
-fn zirImport(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirImport(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const operand = try resolveConstString(mod, scope, inst.positionals.operand);
 
-    const file_scope = mod.analyzeImport(scope, inst.base.src, operand) catch |err| switch (err) {
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const src = inst_data.src();
+    const operand_src: LazySrcLoc = .{ .node_offset_builtin_call_arg0 = inst_data.src_node };
+    const operand = try sema.resolveConstString(block, operand_src, inst_data.operand);
+
+    const file_scope = sema.analyzeImport(block, src, operand) catch |err| switch (err) {
         error.ImportOutsidePkgPath => {
-            return mod.fail(scope, inst.base.src, "import of file outside package path: '{s}'", .{operand});
+            return sema.mod.fail(&block.base, src, "import of file outside package path: '{s}'", .{operand});
         },
         error.FileNotFound => {
-            return mod.fail(scope, inst.base.src, "unable to find '{s}'", .{operand});
+            return sema.mod.fail(&block.base, src, "unable to find '{s}'", .{operand});
         },
         else => {
             // TODO: make sure this gets retried and not cached
-            return mod.fail(scope, inst.base.src, "unable to open '{s}': {s}", .{ operand, @errorName(err) });
+            return sema.mod.fail(&block.base, src, "unable to open '{s}': {s}", .{ operand, @errorName(err) });
         },
     };
-    return mod.constType(scope, inst.base.src, file_scope.root_container.ty);
+    return sema.mod.constType(block.arena, src, file_scope.root_container.ty);
 }
 
-fn zirShl(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirShl(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    return mod.fail(scope, inst.base.src, "TODO implement zirShl", .{});
+    return sema.mod.fail(&block.base, sema.src, "TODO implement zirShl", .{});
 }
 
-fn zirShr(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirShr(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    return mod.fail(scope, inst.base.src, "TODO implement zirShr", .{});
+    return sema.mod.fail(&block.base, sema.src, "TODO implement zirShr", .{});
 }
 
-fn zirBitwise(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirBitwise(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const lhs = try resolveInst(mod, scope, inst.positionals.lhs);
-    const rhs = try resolveInst(mod, scope, inst.positionals.rhs);
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const lhs = sema.resolveInst(bin_inst.lhs);
+    const rhs = sema.resolveInst(bin_inst.rhs);
 
     const instructions = &[_]*Inst{ lhs, rhs };
-    const resolved_type = try mod.resolvePeerTypes(scope, instructions);
-    const casted_lhs = try mod.coerce(scope, resolved_type, lhs);
-    const casted_rhs = try mod.coerce(scope, resolved_type, rhs);
+    const resolved_type = try sema.resolvePeerTypes(block, instructions);
+    const casted_lhs = try sema.coerce(block, resolved_type, lhs);
+    const casted_rhs = try sema.coerce(block, resolved_type, rhs);
 
     const scalar_type = if (resolved_type.zigTypeTag() == .Vector)
         resolved_type.elemType()
@@ -2065,14 +2269,14 @@ fn zirBitwise(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*In
 
     if (lhs.ty.zigTypeTag() == .Vector and rhs.ty.zigTypeTag() == .Vector) {
         if (lhs.ty.arrayLen() != rhs.ty.arrayLen()) {
-            return mod.fail(scope, inst.base.src, "vector length mismatch: {d} and {d}", .{
+            return sema.mod.fail(&block.base, sema.src, "vector length mismatch: {d} and {d}", .{
                 lhs.ty.arrayLen(),
                 rhs.ty.arrayLen(),
             });
         }
-        return mod.fail(scope, inst.base.src, "TODO implement support for vectors in zirBitwise", .{});
+        return sema.mod.fail(&block.base, sema.src, "TODO implement support for vectors in zirBitwise", .{});
     } else if (lhs.ty.zigTypeTag() == .Vector or rhs.ty.zigTypeTag() == .Vector) {
-        return mod.fail(scope, inst.base.src, "mixed scalar and vector operands to binary expression: '{}' and '{}'", .{
+        return sema.mod.fail(&block.base, sema.src, "mixed scalar and vector operands to binary expression: '{}' and '{}'", .{
             lhs.ty,
             rhs.ty,
         });
@@ -2081,22 +2285,22 @@ fn zirBitwise(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*In
     const is_int = scalar_tag == .Int or scalar_tag == .ComptimeInt;
 
     if (!is_int) {
-        return mod.fail(scope, inst.base.src, "invalid operands to binary bitwise expression: '{s}' and '{s}'", .{ @tagName(lhs.ty.zigTypeTag()), @tagName(rhs.ty.zigTypeTag()) });
+        return sema.mod.fail(&block.base, sema.src, "invalid operands to binary bitwise expression: '{s}' and '{s}'", .{ @tagName(lhs.ty.zigTypeTag()), @tagName(rhs.ty.zigTypeTag()) });
     }
 
     if (casted_lhs.value()) |lhs_val| {
         if (casted_rhs.value()) |rhs_val| {
             if (lhs_val.isUndef() or rhs_val.isUndef()) {
-                return mod.constInst(scope, inst.base.src, .{
+                return sema.mod.constInst(block.arena, sema.src, .{
                     .ty = resolved_type,
                     .val = Value.initTag(.undef),
                 });
             }
-            return mod.fail(scope, inst.base.src, "TODO implement comptime bitwise operations", .{});
+            return sema.mod.fail(&block.base, sema.src, "TODO implement comptime bitwise operations", .{});
         }
     }
 
-    const b = try mod.requireRuntimeBlock(scope, inst.base.src);
+    try sema.requireRuntimeBlock(block, sema.src);
-    const ir_tag = switch (inst.base.tag) {
+    const ir_tag = switch (sema.code.instructions.items(.tag)[inst]) {
         .bit_and => Inst.Tag.bit_and,
         .bit_or => Inst.Tag.bit_or,
@@ -2107,35 +2311,36 @@ fn zirBitwise(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*In
-    return mod.addBinOp(b, inst.base.src, scalar_type, ir_tag, casted_lhs, casted_rhs);
+    return sema.mod.addBinOp(block, sema.src, scalar_type, ir_tag, casted_lhs, casted_rhs);
 }
 
-fn zirBitNot(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirBitNot(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    return mod.fail(scope, inst.base.src, "TODO implement zirBitNot", .{});
+    return sema.mod.fail(&block.base, sema.src, "TODO implement zirBitNot", .{});
 }
 
-fn zirArrayCat(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirArrayCat(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    return mod.fail(scope, inst.base.src, "TODO implement zirArrayCat", .{});
+    return sema.mod.fail(&block.base, sema.src, "TODO implement zirArrayCat", .{});
 }
 
-fn zirArrayMul(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirArrayMul(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    return mod.fail(scope, inst.base.src, "TODO implement zirArrayMul", .{});
+    return sema.mod.fail(&block.base, sema.src, "TODO implement zirArrayMul", .{});
 }
 
-fn zirArithmetic(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirArithmetic(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const lhs = try resolveInst(mod, scope, inst.positionals.lhs);
-    const rhs = try resolveInst(mod, scope, inst.positionals.rhs);
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const lhs = sema.resolveInst(bin_inst.lhs);
+    const rhs = sema.resolveInst(bin_inst.rhs);
 
     const instructions = &[_]*Inst{ lhs, rhs };
-    const resolved_type = try mod.resolvePeerTypes(scope, instructions);
-    const casted_lhs = try mod.coerce(scope, resolved_type, lhs);
-    const casted_rhs = try mod.coerce(scope, resolved_type, rhs);
+    const resolved_type = try sema.resolvePeerTypes(block, instructions);
+    const casted_lhs = try sema.coerce(block, resolved_type, lhs);
+    const casted_rhs = try sema.coerce(block, resolved_type, rhs);
 
     const scalar_type = if (resolved_type.zigTypeTag() == .Vector)
         resolved_type.elemType()
@@ -2146,14 +2351,14 @@ fn zirArithmetic(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!
 
     if (lhs.ty.zigTypeTag() == .Vector and rhs.ty.zigTypeTag() == .Vector) {
         if (lhs.ty.arrayLen() != rhs.ty.arrayLen()) {
-            return mod.fail(scope, inst.base.src, "vector length mismatch: {d} and {d}", .{
+            return sema.mod.fail(&block.base, sema.src, "vector length mismatch: {d} and {d}", .{
                 lhs.ty.arrayLen(),
                 rhs.ty.arrayLen(),
             });
         }
-        return mod.fail(scope, inst.base.src, "TODO implement support for vectors in zirBinOp", .{});
+        return sema.mod.fail(&block.base, sema.src, "TODO implement support for vectors in zirBinOp", .{});
     } else if (lhs.ty.zigTypeTag() == .Vector or rhs.ty.zigTypeTag() == .Vector) {
-        return mod.fail(scope, inst.base.src, "mixed scalar and vector operands to binary expression: '{}' and '{}'", .{
+        return sema.mod.fail(&block.base, sema.src, "mixed scalar and vector operands to binary expression: '{}' and '{}'", .{
             lhs.ty,
             rhs.ty,
         });
@@ -2163,13 +2368,13 @@ fn zirArithmetic(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!
     const is_float = scalar_tag == .Float or scalar_tag == .ComptimeFloat;
 
     if (!is_int and !(is_float and floatOpAllowed(inst.base.tag))) {
-        return mod.fail(scope, inst.base.src, "invalid operands to binary expression: '{s}' and '{s}'", .{ @tagName(lhs.ty.zigTypeTag()), @tagName(rhs.ty.zigTypeTag()) });
+        return sema.mod.fail(&block.base, sema.src, "invalid operands to binary expression: '{s}' and '{s}'", .{ @tagName(lhs.ty.zigTypeTag()), @tagName(rhs.ty.zigTypeTag()) });
     }
 
     if (casted_lhs.value()) |lhs_val| {
         if (casted_rhs.value()) |rhs_val| {
             if (lhs_val.isUndef() or rhs_val.isUndef()) {
-                return mod.constInst(scope, inst.base.src, .{
+                return sema.mod.constInst(block.arena, sema.src, .{
                     .ty = resolved_type,
                     .val = Value.initTag(.undef),
                 });
@@ -2178,7 +2383,7 @@ fn zirArithmetic(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!
         }
     }
 
-    const b = try mod.requireRuntimeBlock(scope, inst.base.src);
+    try sema.requireRuntimeBlock(block, sema.src);
-    const ir_tag: Inst.Tag = switch (inst.base.tag) {
+    const ir_tag: Inst.Tag = switch (sema.code.instructions.items(.tag)[inst]) {
         .add => .add,
         .addwrap => .addwrap,
@@ -2186,18 +2391,18 @@ fn zirArithmetic(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!
         .subwrap => .subwrap,
         .mul => .mul,
         .mulwrap => .mulwrap,
-        else => return mod.fail(scope, inst.base.src, "TODO implement arithmetic for operand '{s}''", .{@tagName(inst.base.tag)}),
+        else => return sema.mod.fail(&block.base, sema.src, "TODO implement arithmetic for operand '{s}'", .{@tagName(sema.code.instructions.items(.tag)[inst])}),
     };
 
-    return mod.addBinOp(b, inst.base.src, scalar_type, ir_tag, casted_lhs, casted_rhs);
+    return sema.mod.addBinOp(block, sema.src, scalar_type, ir_tag, casted_lhs, casted_rhs);
 }
 
 /// Analyzes operands that are known at comptime
-fn analyzeInstComptimeOp(mod: *Module, scope: *Scope, res_type: Type, inst: *zir.Inst.BinOp, lhs_val: Value, rhs_val: Value) InnerError!*Inst {
+fn analyzeInstComptimeOp(sema: *Sema, block: *Scope.Block, res_type: Type, inst: zir.Inst.Index, lhs_val: Value, rhs_val: Value) InnerError!*Inst {
     // In case rhs is 0, simply return lhs without doing any calculations
     // TODO Once division is implemented we should throw an error when dividing by 0.
     if (rhs_val.compareWithZero(.eq)) {
-        return mod.constInst(scope, inst.base.src, .{
+        return sema.mod.constInst(block.arena, inst.base.src, .{
             .ty = res_type,
             .val = lhs_val,
         });
@@ -2207,89 +2412,117 @@ fn analyzeInstComptimeOp(mod: *Module, scope: *Scope, res_type: Type, inst: *zir
     const value = switch (inst.base.tag) {
         .add => blk: {
             const val = if (is_int)
-                try Module.intAdd(scope.arena(), lhs_val, rhs_val)
+                try Module.intAdd(block.arena, lhs_val, rhs_val)
             else
                 try mod.floatAdd(scope, res_type, inst.base.src, lhs_val, rhs_val);
             break :blk val;
         },
         .sub => blk: {
             const val = if (is_int)
-                try Module.intSub(scope.arena(), lhs_val, rhs_val)
+                try Module.intSub(block.arena, lhs_val, rhs_val)
             else
                 try mod.floatSub(scope, res_type, inst.base.src, lhs_val, rhs_val);
             break :blk val;
         },
-        else => return mod.fail(scope, inst.base.src, "TODO Implement arithmetic operand '{s}'", .{@tagName(inst.base.tag)}),
+        else => return sema.mod.fail(&block.base, inst.base.src, "TODO Implement arithmetic operand '{s}'", .{@tagName(inst.base.tag)}),
     };
 
     log.debug("{s}({}, {}) result: {}", .{ @tagName(inst.base.tag), lhs_val, rhs_val, value });
 
-    return mod.constInst(scope, inst.base.src, .{
+    return sema.mod.constInst(block.arena, inst.base.src, .{
         .ty = res_type,
         .val = value,
     });
 }
 
-fn zirDeref(mod: *Module, scope: *Scope, deref: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirDeref(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const ptr = try resolveInst(mod, scope, deref.positionals.operand);
-    return mod.analyzeDeref(scope, deref.base.src, ptr, deref.positionals.operand.src);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_node;
+    const src = inst_data.src();
+    const ptr_src: LazySrcLoc = .{ .node_offset_deref_ptr = inst_data.src_node };
+    const ptr = sema.resolveInst(block, inst_data.operand);
+    return sema.analyzeDeref(block, src, ptr, ptr_src);
 }
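The `inst_data = sema.code.instructions.items(.data)[inst].un_node` pattern above is how every converted handler now decodes an instruction: a 1-byte tag column tells it which variant of the 8-byte data column to read. A toy sketch of the idea (all names and layouts here are invented for illustration; they are not the compiler's actual types):

```zig
const std = @import("std");

// Toy struct-of-arrays model: tags and data live in parallel columns,
// like MultiArrayList's `items(.tag)` / `items(.data)` views.
const Tag = enum(u8) { add, deref };
const Data = union {
    bin: struct { lhs: u32, rhs: u32 },
    un_node: struct { operand: u32, src_node: i32 },
};

pub fn main() void {
    const tags = [_]Tag{ .add, .deref };
    const datas = [_]Data{
        .{ .bin = .{ .lhs = 1, .rhs = 2 } },
        .{ .un_node = .{ .operand = 7, .src_node = -3 } },
    };
    const inst: usize = 1;
    // The tag decides how the 8 bytes of data are interpreted,
    // mirroring `sema.code.instructions.items(.data)[inst].un_node`.
    switch (tags[inst]) {
        .deref => std.debug.print("operand={d}\n", .{datas[inst].un_node.operand}),
        .add => std.debug.print("lhs={d}\n", .{datas[inst].bin.lhs}),
    }
}
```

The payoff of the column layout is cache density: walking only the tag column never touches the 8-byte data column.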
 
-fn zirAsm(mod: *Module, scope: *Scope, assembly: *zir.Inst.Asm) InnerError!*Inst {
+fn zirAsm(
+    sema: *Sema,
+    block: *Scope.Block,
+    assembly: zir.Inst.Index,
+    is_volatile: bool,
+) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
 
-    const return_type = try resolveType(mod, scope, assembly.positionals.return_type);
-    const asm_source = try resolveConstString(mod, scope, assembly.positionals.asm_source);
-    const output = if (assembly.kw_args.output) |o| try resolveConstString(mod, scope, o) else null;
+    const inst_data = sema.code.instructions.items(.data)[assembly].pl_node;
+    const src = inst_data.src();
+    const asm_source_src: LazySrcLoc = .{ .node_offset_asm_source = inst_data.src_node };
+    const ret_ty_src: LazySrcLoc = .{ .node_offset_asm_ret_ty = inst_data.src_node };
+    const extra = sema.code.extraData(zir.Inst.Asm, inst_data.payload_index);
+    const return_type = try sema.resolveType(block, ret_ty_src, extra.data.return_type);
+    const asm_source = try sema.resolveConstString(block, asm_source_src, extra.data.asm_source);
+
+    var extra_i = extra.end;
+    const output = if (extra.data.output != 0) blk: {
+        const name = sema.code.nullTerminatedString(sema.code.extra[extra_i]);
+        extra_i += 1;
+        break :blk .{
+            .name = name,
+            .inst = try sema.resolveInst(block, extra.data.output),
+        };
+    } else null;
 
-    const arena = scope.arena();
-    const inputs = try arena.alloc([]const u8, assembly.kw_args.inputs.len);
-    const clobbers = try arena.alloc([]const u8, assembly.kw_args.clobbers.len);
-    const args = try arena.alloc(*Inst, assembly.kw_args.args.len);
+    const args = try block.arena.alloc(*Inst, extra.data.args_len);
+    const inputs = try block.arena.alloc([]const u8, extra.data.args_len);
+    const clobbers = try block.arena.alloc([]const u8, extra.data.clobbers_len);
 
-    for (inputs) |*elem, i| {
-        elem.* = try arena.dupe(u8, assembly.kw_args.inputs[i]);
+    for (args) |*arg| {
+        const uncasted = sema.resolveInst(block, sema.code.extra[extra_i]);
+        extra_i += 1;
+        arg.* = try sema.coerce(block, Type.initTag(.usize), uncasted);
     }
-    for (clobbers) |*elem, i| {
-        elem.* = try arena.dupe(u8, assembly.kw_args.clobbers[i]);
+    for (inputs) |*name| {
+        name.* = sema.code.nullTerminatedString(sema.code.extra[extra_i]);
+        extra_i += 1;
     }
-    for (args) |*elem, i| {
-        const arg = try resolveInst(mod, scope, assembly.kw_args.args[i]);
-        elem.* = try mod.coerce(scope, Type.initTag(.usize), arg);
+    for (clobbers) |*name| {
+        name.* = sema.code.nullTerminatedString(sema.code.extra[extra_i]);
+        extra_i += 1;
     }
 
-    const b = try mod.requireRuntimeBlock(scope, assembly.base.src);
-    const inst = try b.arena.create(Inst.Assembly);
+    try sema.requireRuntimeBlock(block, src);
+    const inst = try block.arena.create(Inst.Assembly);
     inst.* = .{
         .base = .{
             .tag = .assembly,
             .ty = return_type,
-            .src = assembly.base.src,
+            .src = src,
         },
         .asm_source = asm_source,
-        .is_volatile = assembly.kw_args.@"volatile",
-        .output = output,
+        .is_volatile = is_volatile,
+        .output = if (output) |o| o.inst else null,
+        .output_name = if (output) |o| o.name else null,
         .inputs = inputs,
         .clobbers = clobbers,
         .args = args,
     };
-    try b.instructions.append(mod.gpa, &inst.base);
+    try block.instructions.append(sema.mod.gpa, &inst.base);
     return &inst.base;
 }
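`zirAsm` above shows the other half of the layout: payloads too big for 8 bytes live in `extra`, with a fixed header followed by variable-length trailing `u32`s that are consumed with a moving `extra_i` cursor. A minimal sketch of that cursor walk (counts and values invented for illustration):

```zig
const std = @import("std");

// Toy `extra` array: a two-field header (args_len, clobbers_len),
// then `args_len` operand refs followed by `clobbers_len` string indices.
pub fn main() void {
    const extra = [_]u32{ 2, 1, 10, 20, 99 };
    var extra_i: usize = 2; // cursor starts just past the fixed header
    const args_len = extra[0];
    const clobbers_len = extra[1];
    var i: u32 = 0;
    while (i < args_len) : (i += 1) {
        std.debug.print("arg ref: {d}\n", .{extra[extra_i]});
        extra_i += 1;
    }
    i = 0;
    while (i < clobbers_len) : (i += 1) {
        std.debug.print("clobber string index: {d}\n", .{extra[extra_i]});
        extra_i += 1;
    }
}
```

Because each trailing group's length is stored in the header, the sections must be consumed in a fixed order, which is why the real handler increments a single shared `extra_i`.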
 
 fn zirCmp(
-    mod: *Module,
-    scope: *Scope,
-    inst: *zir.Inst.BinOp,
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
     op: std.math.CompareOperator,
 ) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const lhs = try resolveInst(mod, scope, inst.positionals.lhs);
-    const rhs = try resolveInst(mod, scope, inst.positionals.rhs);
+
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const lhs = sema.resolveInst(block, bin_inst.lhs);
+    const rhs = sema.resolveInst(block, bin_inst.rhs);
 
     const is_equality_cmp = switch (op) {
         .eq, .neq => true,
@@ -2299,37 +2532,37 @@ fn zirCmp(
     const rhs_ty_tag = rhs.ty.zigTypeTag();
     if (is_equality_cmp and lhs_ty_tag == .Null and rhs_ty_tag == .Null) {
         // null == null, null != null
-        return mod.constBool(scope, inst.base.src, op == .eq);
+        return sema.mod.constBool(block.arena, inst.base.src, op == .eq);
     } else if (is_equality_cmp and
         ((lhs_ty_tag == .Null and rhs_ty_tag == .Optional) or
         rhs_ty_tag == .Null and lhs_ty_tag == .Optional))
     {
         // comparing null with optionals
         const opt_operand = if (lhs_ty_tag == .Optional) lhs else rhs;
-        return mod.analyzeIsNull(scope, inst.base.src, opt_operand, op == .neq);
+        return sema.analyzeIsNull(block, inst.base.src, opt_operand, op == .neq);
     } else if (is_equality_cmp and
         ((lhs_ty_tag == .Null and rhs.ty.isCPtr()) or (rhs_ty_tag == .Null and lhs.ty.isCPtr())))
     {
-        return mod.fail(scope, inst.base.src, "TODO implement C pointer cmp", .{});
+        return sema.mod.fail(&block.base, inst.base.src, "TODO implement C pointer cmp", .{});
     } else if (lhs_ty_tag == .Null or rhs_ty_tag == .Null) {
         const non_null_type = if (lhs_ty_tag == .Null) rhs.ty else lhs.ty;
-        return mod.fail(scope, inst.base.src, "comparison of '{}' with null", .{non_null_type});
+        return sema.mod.fail(&block.base, inst.base.src, "comparison of '{}' with null", .{non_null_type});
     } else if (is_equality_cmp and
         ((lhs_ty_tag == .EnumLiteral and rhs_ty_tag == .Union) or
         (rhs_ty_tag == .EnumLiteral and lhs_ty_tag == .Union)))
     {
-        return mod.fail(scope, inst.base.src, "TODO implement equality comparison between a union's tag value and an enum literal", .{});
+        return sema.mod.fail(&block.base, inst.base.src, "TODO implement equality comparison between a union's tag value and an enum literal", .{});
     } else if (lhs_ty_tag == .ErrorSet and rhs_ty_tag == .ErrorSet) {
         if (!is_equality_cmp) {
-            return mod.fail(scope, inst.base.src, "{s} operator not allowed for errors", .{@tagName(op)});
+            return sema.mod.fail(&block.base, inst.base.src, "{s} operator not allowed for errors", .{@tagName(op)});
         }
         if (rhs.value()) |rval| {
             if (lhs.value()) |lval| {
                // TODO optimization opportunity: check whether comparing the names with std.mem.eql is faster than calling Module.getErrorValue to get the values and comparing those
-                return mod.constBool(scope, inst.base.src, std.mem.eql(u8, lval.castTag(.@"error").?.data.name, rval.castTag(.@"error").?.data.name) == (op == .eq));
+                return sema.mod.constBool(block.arena, inst.base.src, std.mem.eql(u8, lval.castTag(.@"error").?.data.name, rval.castTag(.@"error").?.data.name) == (op == .eq));
             }
         }
-        const b = try mod.requireRuntimeBlock(scope, inst.base.src);
+        try sema.requireRuntimeBlock(block, inst.base.src);
         return mod.addBinOp(b, inst.base.src, Type.initTag(.bool), if (op == .eq) .cmp_eq else .cmp_neq, lhs, rhs);
     } else if (lhs.ty.isNumeric() and rhs.ty.isNumeric()) {
         // This operation allows any combination of integer and float types, regardless of the
@@ -2338,110 +2571,153 @@ fn zirCmp(
         return mod.cmpNumeric(scope, inst.base.src, lhs, rhs, op);
     } else if (lhs_ty_tag == .Type and rhs_ty_tag == .Type) {
         if (!is_equality_cmp) {
-            return mod.fail(scope, inst.base.src, "{s} operator not allowed for types", .{@tagName(op)});
+            return sema.mod.fail(&block.base, inst.base.src, "{s} operator not allowed for types", .{@tagName(op)});
         }
-        return mod.constBool(scope, inst.base.src, lhs.value().?.eql(rhs.value().?) == (op == .eq));
+        return sema.mod.constBool(block.arena, inst.base.src, lhs.value().?.eql(rhs.value().?) == (op == .eq));
     }
-    return mod.fail(scope, inst.base.src, "TODO implement more cmp analysis", .{});
+    return sema.mod.fail(&block.base, inst.base.src, "TODO implement more cmp analysis", .{});
 }
 
-fn zirTypeof(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirTypeof(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const operand = try resolveInst(mod, scope, inst.positionals.operand);
-    return mod.constType(scope, inst.base.src, operand.ty);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const operand = sema.resolveInst(block, inst_data.operand);
+    return sema.mod.constType(block.arena, inst_data.src(), operand.ty);
 }
 
-fn zirTypeofPeer(mod: *Module, scope: *Scope, inst: *zir.Inst.TypeOfPeer) InnerError!*Inst {
+fn zirTypeofPeer(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    var insts_to_res = try mod.gpa.alloc(*ir.Inst, inst.positionals.items.len);
-    defer mod.gpa.free(insts_to_res);
-    for (inst.positionals.items) |item, i| {
-        insts_to_res[i] = try resolveInst(mod, scope, item);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].pl_node;
+    const src = inst_data.src();
+    const extra = sema.code.extraData(zir.Inst.MultiOp, inst_data.payload_index);
+
+    const inst_list = try sema.mod.gpa.alloc(*ir.Inst, extra.data.operands_len);
+    defer sema.mod.gpa.free(inst_list);
+
+    const src_list = try sema.mod.gpa.alloc(LazySrcLoc, extra.data.operands_len);
+    defer sema.mod.gpa.free(src_list);
+
+    for (sema.code.extra[extra.end..][0..extra.data.operands_len]) |arg_ref, i| {
+        inst_list[i] = sema.resolveInst(block, arg_ref);
+        src_list[i] = .{ .node_offset_builtin_call_argn = inst_data.src_node };
     }
-    const pt_res = try mod.resolvePeerTypes(scope, insts_to_res);
-    return mod.constType(scope, inst.base.src, pt_res);
+
+    const result_type = try sema.resolvePeerTypes(block, inst_list, src_list);
+    return sema.mod.constType(block.arena, src, result_type);
 }
 
-fn zirBoolNot(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirBoolNot(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const uncasted_operand = try resolveInst(mod, scope, inst.positionals.operand);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const src = inst_data.src();
+    const uncasted_operand = sema.resolveInst(block, inst_data.operand);
+
     const bool_type = Type.initTag(.bool);
-    const operand = try mod.coerce(scope, bool_type, uncasted_operand);
+    const operand = try sema.coerce(block, bool_type, uncasted_operand);
     if (try mod.resolveDefinedValue(scope, operand)) |val| {
-        return mod.constBool(scope, inst.base.src, !val.toBool());
+        return sema.mod.constBool(block.arena, src, !val.toBool());
     }
-    const b = try mod.requireRuntimeBlock(scope, inst.base.src);
-    return mod.addUnOp(b, inst.base.src, bool_type, .not, operand);
+    try sema.requireRuntimeBlock(block, src);
+    return block.addUnOp(src, bool_type, .not, operand);
 }
 
-fn zirBoolOp(mod: *Module, scope: *Scope, inst: *zir.Inst.BinOp) InnerError!*Inst {
+fn zirBoolOp(
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
+    comptime is_bool_or: bool,
+) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const bool_type = Type.initTag(.bool);
-    const uncasted_lhs = try resolveInst(mod, scope, inst.positionals.lhs);
-    const lhs = try mod.coerce(scope, bool_type, uncasted_lhs);
-    const uncasted_rhs = try resolveInst(mod, scope, inst.positionals.rhs);
-    const rhs = try mod.coerce(scope, bool_type, uncasted_rhs);
 
-    const is_bool_or = inst.base.tag == .bool_or;
+    const bool_type = Type.initTag(.bool);
+    const bin_inst = sema.code.instructions.items(.data)[inst].bin;
+    const uncasted_lhs = sema.resolveInst(block, bin_inst.lhs);
+    const lhs = try sema.coerce(block, bool_type, uncasted_lhs);
+    const uncasted_rhs = sema.resolveInst(block, bin_inst.rhs);
+    const rhs = try sema.coerce(block, bool_type, uncasted_rhs);
 
     if (lhs.value()) |lhs_val| {
         if (rhs.value()) |rhs_val| {
             if (is_bool_or) {
-                return mod.constBool(scope, inst.base.src, lhs_val.toBool() or rhs_val.toBool());
+                return sema.mod.constBool(block.arena, inst.base.src, lhs_val.toBool() or rhs_val.toBool());
             } else {
-                return mod.constBool(scope, inst.base.src, lhs_val.toBool() and rhs_val.toBool());
+                return sema.mod.constBool(block.arena, inst.base.src, lhs_val.toBool() and rhs_val.toBool());
             }
         }
     }
-    const b = try mod.requireRuntimeBlock(scope, inst.base.src);
-    return mod.addBinOp(b, inst.base.src, bool_type, if (is_bool_or) .bool_or else .bool_and, lhs, rhs);
+    try sema.requireRuntimeBlock(block, inst.base.src);
+    const tag: ir.Inst.Tag = if (is_bool_or) .bool_or else .bool_and;
+    return sema.mod.addBinOp(block, inst.base.src, bool_type, tag, lhs, rhs);
 }
 
-fn zirIsNull(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp, invert_logic: bool) InnerError!*Inst {
+fn zirIsNull(
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
+    invert_logic: bool,
+) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const operand = try resolveInst(mod, scope, inst.positionals.operand);
-    return mod.analyzeIsNull(scope, inst.base.src, operand, invert_logic);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const src = inst_data.src();
+    const operand = sema.resolveInst(block, inst_data.operand);
+    return sema.analyzeIsNull(block, src, operand, invert_logic);
 }
 
-fn zirIsNullPtr(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp, invert_logic: bool) InnerError!*Inst {
+fn zirIsNullPtr(
+    sema: *Sema,
+    block: *Scope.Block,
+    inst: zir.Inst.Index,
+    invert_logic: bool,
+) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const ptr = try resolveInst(mod, scope, inst.positionals.operand);
-    const loaded = try mod.analyzeDeref(scope, inst.base.src, ptr, ptr.src);
-    return mod.analyzeIsNull(scope, inst.base.src, loaded, invert_logic);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const src = inst_data.src();
+    const ptr = sema.resolveInst(block, inst_data.operand);
+    const loaded = try sema.analyzeDeref(block, src, ptr, src);
+    return sema.analyzeIsNull(block, src, loaded, invert_logic);
 }
 
-fn zirIsErr(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirIsErr(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const operand = try resolveInst(mod, scope, inst.positionals.operand);
-    return mod.analyzeIsErr(scope, inst.base.src, operand);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const operand = sema.resolveInst(block, inst_data.operand);
+    return sema.analyzeIsErr(block, inst_data.src(), operand);
 }
 
-fn zirIsErrPtr(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+fn zirIsErrPtr(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const ptr = try resolveInst(mod, scope, inst.positionals.operand);
-    const loaded = try mod.analyzeDeref(scope, inst.base.src, ptr, ptr.src);
-    return mod.analyzeIsErr(scope, inst.base.src, loaded);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].un_tok;
+    const src = inst_data.src();
+    const ptr = sema.resolveInst(block, inst_data.operand);
+    const loaded = try sema.analyzeDeref(block, src, ptr, src);
+    return sema.analyzeIsErr(block, src, loaded);
 }
 
-fn zirCondbr(mod: *Module, scope: *Scope, inst: *zir.Inst.CondBr) InnerError!*Inst {
+fn zirCondbr(sema: *Sema, parent_block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const uncasted_cond = try resolveInst(mod, scope, inst.positionals.condition);
-    const cond = try mod.coerce(scope, Type.initTag(.bool), uncasted_cond);
 
-    const parent_block = scope.cast(Scope.Block).?;
+    const uncasted_cond = sema.resolveInst(parent_block, inst.positionals.condition);
+    const cond = try sema.coerce(parent_block, Type.initTag(.bool), uncasted_cond);
 
     if (try mod.resolveDefinedValue(scope, cond)) |cond_val| {
         const body = if (cond_val.toBool()) &inst.positionals.then_body else &inst.positionals.else_body;
-        try analyzeBody(mod, parent_block, body.*);
+        try sema.body(parent_block, body.*);
         return mod.constNoReturn(scope, inst.base.src);
     }
 
@@ -2458,7 +2734,7 @@ fn zirCondbr(mod: *Module, scope: *Scope, inst: *zir.Inst.CondBr) InnerError!*In
         .branch_quota = parent_block.branch_quota,
     };
     defer true_block.instructions.deinit(mod.gpa);
-    try analyzeBody(mod, &true_block, inst.positionals.then_body);
+    try sema.body(&true_block, inst.positionals.then_body);
 
     var false_block: Scope.Block = .{
         .parent = parent_block,
@@ -2473,68 +2749,37 @@ fn zirCondbr(mod: *Module, scope: *Scope, inst: *zir.Inst.CondBr) InnerError!*In
         .branch_quota = parent_block.branch_quota,
     };
     defer false_block.instructions.deinit(mod.gpa);
-    try analyzeBody(mod, &false_block, inst.positionals.else_body);
+    try sema.body(&false_block, inst.positionals.else_body);
 
-    const then_body: ir.Body = .{ .instructions = try scope.arena().dupe(*Inst, true_block.instructions.items) };
-    const else_body: ir.Body = .{ .instructions = try scope.arena().dupe(*Inst, false_block.instructions.items) };
+    const then_body: ir.Body = .{ .instructions = try parent_block.arena.dupe(*Inst, true_block.instructions.items) };
+    const else_body: ir.Body = .{ .instructions = try parent_block.arena.dupe(*Inst, false_block.instructions.items) };
     return mod.addCondBr(parent_block, inst.base.src, cond, then_body, else_body);
 }
 
 fn zirUnreachable(
-    mod: *Module,
-    scope: *Scope,
-    unreach: *zir.Inst.NoOp,
+    sema: *Sema,
+    block: *Scope.Block,
+    zir_index: zir.Inst.Index,
     safety_check: bool,
 ) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const b = try mod.requireRuntimeBlock(scope, unreach.base.src);
+
+    try sema.requireRuntimeBlock(block, zir_index.base.src);
     // TODO Add compile error for @optimizeFor occurring too late in a scope.
-    if (safety_check and mod.wantSafety(scope)) {
-        return mod.safetyPanic(b, unreach.base.src, .unreach);
+    if (safety_check and block.wantSafety()) {
+        return sema.safetyPanic(block, zir_index.base.src, .unreach);
     } else {
-        return mod.addNoOp(b, unreach.base.src, Type.initTag(.noreturn), .unreach);
+        return block.addNoOp(zir_index.base.src, Type.initTag(.noreturn), .unreach);
     }
 }
 
-fn zirReturn(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
-    const tracy = trace(@src());
-    defer tracy.end();
-    const operand = try resolveInst(mod, scope, inst.positionals.operand);
-    const b = try mod.requireFunctionBlock(scope, inst.base.src);
-
-    if (b.inlining) |inlining| {
-        // We are inlining a function call; rewrite the `ret` as a `break`.
-        try inlining.merges.results.append(mod.gpa, operand);
-        const br = try mod.addBr(b, inst.base.src, inlining.merges.block_inst, operand);
-        return &br.base;
-    }
-
-    return mod.addUnOp(b, inst.base.src, Type.initTag(.noreturn), .ret, operand);
+fn zirRetTok(sema: *Sema, block: *Scope.Block, zir_inst: zir.Inst.Index) InnerError!*Inst {
+    @compileError("TODO");
 }
 
-fn zirReturnVoid(mod: *Module, scope: *Scope, inst: *zir.Inst.NoOp) InnerError!*Inst {
-    const tracy = trace(@src());
-    defer tracy.end();
-    const b = try mod.requireFunctionBlock(scope, inst.base.src);
-    if (b.inlining) |inlining| {
-        // We are inlining a function call; rewrite the `retvoid` as a `breakvoid`.
-        const void_inst = try mod.constVoid(scope, inst.base.src);
-        try inlining.merges.results.append(mod.gpa, void_inst);
-        const br = try mod.addBr(b, inst.base.src, inlining.merges.block_inst, void_inst);
-        return &br.base;
-    }
-
-    if (b.func) |func| {
-        // Need to emit a compile error if returning void is not allowed.
-        const void_inst = try mod.constVoid(scope, inst.base.src);
-        const fn_ty = func.owner_decl.typed_value.most_recent.typed_value.ty;
-        const casted_void = try mod.coerce(scope, fn_ty.fnReturnType(), void_inst);
-        if (casted_void.ty.zigTypeTag() != .Void) {
-            return mod.addUnOp(b, inst.base.src, Type.initTag(.noreturn), .ret, casted_void);
-        }
-    }
-    return mod.addNoOp(b, inst.base.src, Type.initTag(.noreturn), .retvoid);
+fn zirRetNode(sema: *Sema, block: *Scope.Block, zir_inst: zir.Inst.Index) InnerError!*Inst {
+    @compileError("TODO");
 }
 
 fn floatOpAllowed(tag: zir.Inst.Tag) bool {
@@ -2545,53 +2790,1080 @@ fn floatOpAllowed(tag: zir.Inst.Tag) bool {
     };
 }
 
-fn zirSimplePtrType(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp, mutable: bool, size: std.builtin.TypeInfo.Pointer.Size) InnerError!*Inst {
+fn zirPtrTypeSimple(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    const elem_type = try resolveType(mod, scope, inst.positionals.operand);
-    const ty = try mod.simplePtrType(scope, inst.base.src, elem_type, mutable, size);
-    return mod.constType(scope, inst.base.src, ty);
+
+    const inst_data = sema.code.instructions.items(.data)[inst].ptr_type_simple;
+    const elem_type = try sema.resolveType(block, .unneeded, inst_data.elem_type);
+    const ty = try sema.mod.ptrType(
+        block.arena,
+        elem_type,
+        null,
+        0,
+        0,
+        0,
+        inst_data.is_mutable,
+        inst_data.is_allowzero,
+        inst_data.is_volatile,
+        inst_data.size,
+    );
+    return sema.mod.constType(block.arena, .unneeded, ty);
 }
 
-fn zirPtrType(mod: *Module, scope: *Scope, inst: *zir.Inst.PtrType) InnerError!*Inst {
+fn zirPtrType(sema: *Sema, block: *Scope.Block, inst: zir.Inst.Index) InnerError!*Inst {
     const tracy = trace(@src());
     defer tracy.end();
-    // TODO lazy values
-    const @"align" = if (inst.kw_args.@"align") |some|
-        @truncate(u32, try resolveInt(mod, scope, some, Type.initTag(.u32)))
-    else
-        0;
-    const bit_offset = if (inst.kw_args.align_bit_start) |some|
-        @truncate(u16, try resolveInt(mod, scope, some, Type.initTag(.u16)))
-    else
-        0;
-    const host_size = if (inst.kw_args.align_bit_end) |some|
-        @truncate(u16, try resolveInt(mod, scope, some, Type.initTag(.u16)))
-    else
-        0;
 
-    if (host_size != 0 and bit_offset >= host_size * 8)
-        return mod.fail(scope, inst.base.src, "bit offset starts after end of host integer", .{});
+    const inst_data = sema.code.instructions.items(.data)[inst].ptr_type;
+    const extra = sema.code.extraData(zir.Inst.PtrType, inst_data.payload_index);
 
-    const sentinel = if (inst.kw_args.sentinel) |some|
-        (try resolveInstConst(mod, scope, some)).val
-    else
-        null;
+    var extra_i = extra.end;
+
+    const sentinel = if (inst_data.flags.has_sentinel) blk: {
+        const ref = sema.code.extra[extra_i];
+        extra_i += 1;
+        break :blk (try sema.resolveInstConst(block, .unneeded, ref)).val;
+    } else null;
+
+    const abi_align = if (inst_data.flags.has_align) blk: {
+        const ref = sema.code.extra[extra_i];
+        extra_i += 1;
+        break :blk try sema.resolveAlreadyCoercedInt(block, .unneeded, ref, u32);
+    } else 0;
+
+    const bit_start = if (inst_data.flags.has_bit_start) blk: {
+        const ref = sema.code.extra[extra_i];
+        extra_i += 1;
+        break :blk try sema.resolveAlreadyCoercedInt(block, .unneeded, ref, u16);
+    } else 0;
 
-    const elem_type = try resolveType(mod, scope, inst.positionals.child_type);
+    const bit_end = if (inst_data.flags.has_bit_end) blk: {
+        const ref = sema.code.extra[extra_i];
+        extra_i += 1;
+        break :blk try sema.resolveAlreadyCoercedInt(block, .unneeded, ref, u16);
+    } else 0;
+
+    if (bit_end != 0 and bit_start >= bit_end * 8)
+        return sema.mod.fail(&block.base, inst.base.src, "bit offset starts after end of host integer", .{});
+
+    const elem_type = try sema.resolveType(block, .unneeded, extra.data.elem_type);
 
-    const ty = try mod.ptrType(
-        scope,
+    const ty = try sema.mod.ptrType(
+        block.arena,
-        inst.base.src,
         elem_type,
         sentinel,
-        @"align",
-        bit_offset,
-        host_size,
-        inst.kw_args.mutable,
-        inst.kw_args.@"allowzero",
-        inst.kw_args.@"volatile",
-        inst.kw_args.size,
+        abi_align,
+        bit_start,
+        bit_end,
+        inst_data.flags.is_mutable,
+        inst_data.flags.is_allowzero,
+        inst_data.flags.is_volatile,
+        inst_data.size,
     );
-    return mod.constType(scope, inst.base.src, ty);
+    return sema.mod.constType(block.arena, .unneeded, ty);
+}
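`zirPtrType` adds one more wrinkle: optional trailing operands gated by `has_*` flag bits, so the cursor only advances for fields that are actually encoded. A toy sketch of the flag-gated decode (field names invented for illustration):

```zig
const std = @import("std");

// Each `has_*` bit says whether one more u32 follows in `extra`.
const Flags = packed struct {
    has_sentinel: bool,
    has_align: bool,
};

pub fn main() void {
    const flags = Flags{ .has_sentinel = false, .has_align = true };
    const extra = [_]u32{16}; // only the alignment slot is present
    var extra_i: usize = 0;

    const sentinel: ?u32 = if (flags.has_sentinel) blk: {
        const v = extra[extra_i];
        extra_i += 1;
        break :blk v;
    } else null;

    const abi_align: u32 = if (flags.has_align) blk: {
        const v = extra[extra_i];
        extra_i += 1;
        break :blk v;
    } else 0;

    std.debug.print("has_sentinel={} align={d}\n", .{ sentinel != null, abi_align });
}
```

This keeps the common case (a plain pointer type with no sentinel, alignment, or bit range) at zero trailing words.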
+
+fn requireFunctionBlock(sema: *Sema, block: *Scope.Block, src: LazySrcLoc) !void {
+    if (sema.func == null) {
+        return sema.mod.fail(&block.base, src, "instruction illegal outside function body", .{});
+    }
+}
+
+fn requireRuntimeBlock(sema: *Sema, block: *Scope.Block, src: LazySrcLoc) !void {
+    try sema.requireFunctionBlock(block, src);
+    if (block.is_comptime) {
+        return sema.mod.fail(&block.base, src, "unable to resolve comptime value", .{});
+    }
+}
+
+fn validateVarType(sema: *Sema, block: *Scope.Block, src: LazySrcLoc, ty: Type) !void {
+    if (!ty.isValidVarType(false)) {
+        return sema.mod.fail(&block.base, src, "variable of type '{}' must be const or comptime", .{ty});
+    }
+}
+
+pub const PanicId = enum {
+    unreach,
+    unwrap_null,
+    unwrap_errunion,
+};
+
+fn addSafetyCheck(sema: *Sema, parent_block: *Scope.Block, ok: *Inst, panic_id: PanicId) !void {
+    const block_inst = try parent_block.arena.create(Inst.Block);
+    block_inst.* = .{
+        .base = .{
+            .tag = Inst.Block.base_tag,
+            .ty = Type.initTag(.void),
+            .src = ok.src,
+        },
+        .body = .{
+            .instructions = try parent_block.arena.alloc(*Inst, 1), // Only need space for the condbr.
+        },
+    };
+
+    const ok_body: ir.Body = .{
+        .instructions = try parent_block.arena.alloc(*Inst, 1), // Only need space for the br_void.
+    };
+    const br_void = try parent_block.arena.create(Inst.BrVoid);
+    br_void.* = .{
+        .base = .{
+            .tag = .br_void,
+            .ty = Type.initTag(.noreturn),
+            .src = ok.src,
+        },
+        .block = block_inst,
+    };
+    ok_body.instructions[0] = &br_void.base;
+
+    var fail_block: Scope.Block = .{
+        .parent = parent_block,
+        .inst_map = parent_block.inst_map,
+        .func = parent_block.func,
+        .owner_decl = parent_block.owner_decl,
+        .src_decl = parent_block.src_decl,
+        .instructions = .{},
+        .arena = parent_block.arena,
+        .inlining = parent_block.inlining,
+        .is_comptime = parent_block.is_comptime,
+        .branch_quota = parent_block.branch_quota,
+    };
+
+    defer fail_block.instructions.deinit(sema.mod.gpa);
+
+    _ = try sema.safetyPanic(&fail_block, ok.src, panic_id);
+
+    const fail_body: ir.Body = .{ .instructions = try parent_block.arena.dupe(*Inst, fail_block.instructions.items) };
+
+    const condbr = try parent_block.arena.create(Inst.CondBr);
+    condbr.* = .{
+        .base = .{
+            .tag = .condbr,
+            .ty = Type.initTag(.noreturn),
+            .src = ok.src,
+        },
+        .condition = ok,
+        .then_body = ok_body,
+        .else_body = fail_body,
+    };
+    block_inst.body.instructions[0] = &condbr.base;
+
+    try parent_block.instructions.append(sema.mod.gpa, &block_inst.base);
+}
+
+fn safetyPanic(sema: *Sema, block: *Scope.Block, src: LazySrcLoc, panic_id: PanicId) !*Inst {
+    // TODO Once we have a panic function to call, call it here instead of breakpoint.
+    _ = try block.addNoOp(src, Type.initTag(.void), .breakpoint);
+    return block.addNoOp(src, Type.initTag(.noreturn), .unreach);
+}
+
+fn emitBackwardBranch(sema: *Sema, block: *Scope.Block, src: LazySrcLoc) !void {
+    const shared = block.inlining.?.shared;
+    shared.branch_count += 1;
+    if (shared.branch_count > block.branch_quota.*) {
+        // TODO show the "called from here" stack
+        return mod.fail(&block.base, src, "evaluation exceeded {d} backwards branches", .{
+            block.branch_quota.*,
+        });
+    }
+}
+
+fn namedFieldPtr(
+    sema: *Sema,
+    block: *Scope.Block,
+    src: LazySrcLoc,
+    object_ptr: *Inst,
+    field_name: []const u8,
+    field_name_src: LazySrcLoc,
+) InnerError!*Inst {
+    const mod = sema.mod;
+    const elem_ty = switch (object_ptr.ty.zigTypeTag()) {
+        .Pointer => object_ptr.ty.elemType(),
+        else => return sema.mod.fail(&block.base, object_ptr.src, "expected pointer, found '{}'", .{object_ptr.ty}),
+    };
+    switch (elem_ty.zigTypeTag()) {
+        .Array => {
+            if (mem.eql(u8, field_name, "len")) {
+                return mod.constInst(block.arena, src, .{
+                    .ty = Type.initTag(.single_const_pointer_to_comptime_int),
+                    .val = try Value.Tag.ref_val.create(
+                        block.arena,
+                        try Value.Tag.int_u64.create(block.arena, elem_ty.arrayLen()),
+                    ),
+                });
+            } else {
+                return mod.fail(
+                    &block.base,
+                    field_name_src,
+                    "no member named '{s}' in '{}'",
+                    .{ field_name, elem_ty },
+                );
+            }
+        },
+        .Pointer => {
+            const ptr_child = elem_ty.elemType();
+            switch (ptr_child.zigTypeTag()) {
+                .Array => {
+                    if (mem.eql(u8, field_name, "len")) {
+                        return mod.constInst(block.arena, src, .{
+                            .ty = Type.initTag(.single_const_pointer_to_comptime_int),
+                            .val = try Value.Tag.ref_val.create(
+                                block.arena,
+                                try Value.Tag.int_u64.create(block.arena, ptr_child.arrayLen()),
+                            ),
+                        });
+                    } else {
+                        return mod.fail(
+                            &block.base,
+                            field_name_src,
+                            "no member named '{s}' in '{}'",
+                            .{ field_name, elem_ty },
+                        );
+                    }
+                },
+                else => {},
+            }
+        },
+        .Type => {
+            _ = try sema.resolveConstValue(block, object_ptr.src, object_ptr);
+            const result = try sema.analyzeDeref(block, src, object_ptr, object_ptr.src);
+            const val = result.value().?;
+            const child_type = try val.toType(block.arena);
+            switch (child_type.zigTypeTag()) {
+                .ErrorSet => {
+                    var name: []const u8 = undefined;
+                    // TODO resolve inferred error sets
+                    if (val.castTag(.error_set)) |payload|
+                        name = (payload.data.fields.getEntry(field_name) orelse return sema.mod.fail(&block.base, src, "no error named '{s}' in '{}'", .{ field_name, child_type })).key
+                    else
+                        name = (try mod.getErrorValue(field_name)).key;
+
+                    const result_type = if (child_type.tag() == .anyerror)
+                        try Type.Tag.error_set_single.create(block.arena, name)
+                    else
+                        child_type;
+
+                    return mod.constInst(block.arena, src, .{
+                        .ty = try mod.simplePtrType(block.arena, result_type, false, .One),
+                        .val = try Value.Tag.ref_val.create(
+                            block.arena,
+                            try Value.Tag.@"error".create(block.arena, .{
+                                .name = name,
+                            }),
+                        ),
+                    });
+                },
+                .Struct => {
+                    const container_scope = child_type.getContainerScope();
+                    if (mod.lookupDeclName(&container_scope.base, field_name)) |decl| {
+                        // TODO if !decl.is_pub and inDifferentFiles() "{} is private"
+                        return sema.analyzeDeclRef(block, src, decl);
+                    }
+
+                    if (container_scope.file_scope == mod.root_scope) {
+                        return sema.mod.fail(&block.base, src, "root source file has no member called '{s}'", .{field_name});
+                    } else {
+                        return sema.mod.fail(&block.base, src, "container '{}' has no member called '{s}'", .{ child_type, field_name });
+                    }
+                },
+                else => return sema.mod.fail(&block.base, src, "type '{}' does not support field access", .{child_type}),
+            }
+        },
+        else => {},
+    }
+    return sema.mod.fail(&block.base, src, "type '{}' does not support field access", .{elem_ty});
+}
+
+fn elemPtr(
+    sema: *Sema,
+    block: *Scope.Block,
+    src: LazySrcLoc,
+    array_ptr: *Inst,
+    elem_index: *Inst,
+    elem_index_src: LazySrcLoc,
+) InnerError!*Inst {
+    const mod = sema.mod;
+    const elem_ty = switch (array_ptr.ty.zigTypeTag()) {
+        .Pointer => array_ptr.ty.elemType(),
+        else => return sema.mod.fail(&block.base, array_ptr.src, "expected pointer, found '{}'", .{array_ptr.ty}),
+    };
+    if (!elem_ty.isIndexable()) {
+        return sema.mod.fail(&block.base, src, "array access of non-array type '{}'", .{elem_ty});
+    }
+
+    if (elem_ty.isSinglePointer() and elem_ty.elemType().zigTypeTag() == .Array) {
+        // we have to deref the ptr operand to get the actual array pointer
+        const array_ptr_deref = try sema.analyzeDeref(block, src, array_ptr, array_ptr.src);
+        if (array_ptr_deref.value()) |array_ptr_val| {
+            if (elem_index.value()) |index_val| {
+                // Both array pointer and index are compile-time known.
+                const index_u64 = index_val.toUnsignedInt();
+                // @intCast here because it would have been impossible to construct a value that
+                // required a larger index.
+                const elem_ptr = try array_ptr_val.elemPtr(block.arena, @intCast(usize, index_u64));
+                const pointee_type = elem_ty.elemType().elemType();
+
+                return mod.constInst(block.arena, src, .{
+                    .ty = try Type.Tag.single_const_pointer.create(block.arena, pointee_type),
+                    .val = elem_ptr,
+                });
+            }
+        }
+    }
+
+    return sema.mod.fail(&block.base, src, "TODO implement more analysis in elemPtr", .{});
+}
+
+fn coerce(sema: *Sema, block: *Scope.Block, dest_type: Type, inst: *Inst) InnerError!*Inst {
+    const mod = sema.mod;
+    if (dest_type.tag() == .var_args_param) {
+        return sema.coerceVarArgParam(block, inst);
+    }
+    // If the types are the same, we can return the operand.
+    if (dest_type.eql(inst.ty))
+        return inst;
+
+    const in_memory_result = coerceInMemoryAllowed(dest_type, inst.ty);
+    if (in_memory_result == .ok) {
+        return sema.bitcast(block, dest_type, inst);
+    }
+
+    // undefined to anything
+    if (inst.value()) |val| {
+        if (val.isUndef() or inst.ty.zigTypeTag() == .Undefined) {
+            return mod.constInst(block.arena, inst.src, .{ .ty = dest_type, .val = val });
+        }
+    }
+    assert(inst.ty.zigTypeTag() != .Undefined);
+
+    // null to ?T
+    if (dest_type.zigTypeTag() == .Optional and inst.ty.zigTypeTag() == .Null) {
+        return mod.constInst(block.arena, inst.src, .{ .ty = dest_type, .val = Value.initTag(.null_value) });
+    }
+
+    // T to ?T
+    if (dest_type.zigTypeTag() == .Optional) {
+        var buf: Type.Payload.ElemType = undefined;
+        const child_type = dest_type.optionalChild(&buf);
+        if (child_type.eql(inst.ty)) {
+            return sema.wrapOptional(block, dest_type, inst);
+        } else if (try sema.coerceNum(block, child_type, inst)) |some| {
+            return sema.wrapOptional(block, dest_type, some);
+        }
+    }
+
+    // T to E!T or E to E!T
+    if (dest_type.tag() == .error_union) {
+        return try sema.wrapErrorUnion(block, dest_type, inst);
+    }
+
+    // Coercions where the source is a single pointer to an array.
+    src_array_ptr: {
+        if (!inst.ty.isSinglePointer()) break :src_array_ptr;
+        const array_type = inst.ty.elemType();
+        if (array_type.zigTypeTag() != .Array) break :src_array_ptr;
+        const array_elem_type = array_type.elemType();
+        if (inst.ty.isConstPtr() and !dest_type.isConstPtr()) break :src_array_ptr;
+        if (inst.ty.isVolatilePtr() and !dest_type.isVolatilePtr()) break :src_array_ptr;
+
+        const dst_elem_type = dest_type.elemType();
+        switch (coerceInMemoryAllowed(dst_elem_type, array_elem_type)) {
+            .ok => {},
+            .no_match => break :src_array_ptr,
+        }
+
+        switch (dest_type.ptrSize()) {
+            .Slice => {
+                // *[N]T to []T
+                return sema.coerceArrayPtrToSlice(block, dest_type, inst);
+            },
+            .C => {
+                // *[N]T to [*c]T
+                return sema.coerceArrayPtrToMany(block, dest_type, inst);
+            },
+            .Many => {
+                // *[N]T to [*]T
+                // *[N:s]T to [*:s]T
+                const src_sentinel = array_type.sentinel();
+                const dst_sentinel = dest_type.sentinel();
+                if (src_sentinel == null and dst_sentinel == null)
+                    return sema.coerceArrayPtrToMany(block, dest_type, inst);
+
+                if (src_sentinel) |src_s| {
+                    if (dst_sentinel) |dst_s| {
+                        if (src_s.eql(dst_s)) {
+                            return sema.coerceArrayPtrToMany(block, dest_type, inst);
+                        }
+                    }
+                }
+            },
+            .One => {},
+        }
+    }
+
+    // comptime known number to other number
+    if (try sema.coerceNum(block, dest_type, inst)) |some|
+        return some;
+
+    // integer widening
+    if (inst.ty.zigTypeTag() == .Int and dest_type.zigTypeTag() == .Int) {
+        assert(inst.value() == null); // handled above
+
+        const src_info = inst.ty.intInfo(mod.getTarget());
+        const dst_info = dest_type.intInfo(mod.getTarget());
+        if ((src_info.signedness == dst_info.signedness and dst_info.bits >= src_info.bits) or
+            // small enough unsigned ints can get cast to large enough signed ints
+            (src_info.signedness == .unsigned and dst_info.signedness == .signed and dst_info.bits > src_info.bits))
+        {
+            try sema.requireRuntimeBlock(block, inst.src);
+            return mod.addUnOp(block, inst.src, dest_type, .intcast, inst);
+        }
+    }
+
+    // float widening
+    if (inst.ty.zigTypeTag() == .Float and dest_type.zigTypeTag() == .Float) {
+        assert(inst.value() == null); // handled above
+
+        const src_bits = inst.ty.floatBits(mod.getTarget());
+        const dst_bits = dest_type.floatBits(mod.getTarget());
+        if (dst_bits >= src_bits) {
+            try sema.requireRuntimeBlock(block, inst.src);
+            return mod.addUnOp(block, inst.src, dest_type, .floatcast, inst);
+        }
+    }
+
+    return sema.mod.fail(&block.base, inst.src, "expected {}, found {}", .{ dest_type, inst.ty });
+}
+
+const InMemoryCoercionResult = enum {
+    ok,
+    no_match,
+};
+
+fn coerceInMemoryAllowed(dest_type: Type, src_type: Type) InMemoryCoercionResult {
+    if (dest_type.eql(src_type))
+        return .ok;
+
+    // TODO: implement more of this function
+
+    return .no_match;
+}
+
+fn coerceNum(sema: *Sema, block: *Scope.Block, dest_type: Type, inst: *Inst) InnerError!?*Inst {
+    const mod = sema.mod;
+    const val = inst.value() orelse return null;
+    const src_zig_tag = inst.ty.zigTypeTag();
+    const dst_zig_tag = dest_type.zigTypeTag();
+
+    if (dst_zig_tag == .ComptimeInt or dst_zig_tag == .Int) {
+        if (src_zig_tag == .Float or src_zig_tag == .ComptimeFloat) {
+            if (val.floatHasFraction()) {
+                return sema.mod.fail(&block.base, inst.src, "fractional component prevents float value {} from being cast to type '{}'", .{ val, dest_type });
+            }
+            return sema.mod.fail(&block.base, inst.src, "TODO float to int", .{});
+        } else if (src_zig_tag == .Int or src_zig_tag == .ComptimeInt) {
+            if (!val.intFitsInType(dest_type, mod.getTarget())) {
+                return sema.mod.fail(&block.base, inst.src, "type {} cannot represent integer value {}", .{ inst.ty, val });
+            }
+            return mod.constInst(block.arena, inst.src, .{ .ty = dest_type, .val = val });
+        }
+    } else if (dst_zig_tag == .ComptimeFloat or dst_zig_tag == .Float) {
+        if (src_zig_tag == .Float or src_zig_tag == .ComptimeFloat) {
+            const res = val.floatCast(block.arena, dest_type, mod.getTarget()) catch |err| switch (err) {
+                error.Overflow => return mod.fail(
+                    &block.base,
+                    inst.src,
+                    "cast of value {} to type '{}' loses information",
+                    .{ val, dest_type },
+                ),
+                error.OutOfMemory => return error.OutOfMemory,
+            };
+            return mod.constInst(block.arena, inst.src, .{ .ty = dest_type, .val = res });
+        } else if (src_zig_tag == .Int or src_zig_tag == .ComptimeInt) {
+            return sema.mod.fail(&block.base, inst.src, "TODO int to float", .{});
+        }
+    }
+    return null;
+}
+
+fn coerceVarArgParam(sema: *Sema, block: *Scope.Block, inst: *Inst) !*Inst {
+    switch (inst.ty.zigTypeTag()) {
+        .ComptimeInt, .ComptimeFloat => return sema.mod.fail(&block.base, inst.src, "integer and float literals in var args function must be cast", .{}),
+        else => {},
+    }
+    // TODO implement more of this function.
+    return inst;
+}
+
+fn storePtr(sema: *Sema, block: *Scope.Block, src: LazySrcLoc, ptr: *Inst, uncasted_value: *Inst) !*Inst {
+    const mod = sema.mod;
+    if (ptr.ty.isConstPtr())
+        return sema.mod.fail(&block.base, src, "cannot assign to constant", .{});
+
+    const elem_ty = ptr.ty.elemType();
+    const value = try sema.coerce(block, elem_ty, uncasted_value);
+    if (elem_ty.onePossibleValue() != null)
+        return sema.mod.constVoid(block.arena, .unneeded);
+
+    // TODO handle comptime pointer writes
+    // TODO handle if the element type requires comptime
+
+    try sema.requireRuntimeBlock(block, src);
+    return mod.addBinOp(block, src, Type.initTag(.void), .store, ptr, value);
+}
+
+fn bitcast(sema: *Sema, block: *Scope.Block, dest_type: Type, inst: *Inst) !*Inst {
+    if (inst.value()) |val| {
+        // Keep the comptime Value representation; take the new type.
+        return sema.mod.constInst(block.arena, inst.src, .{ .ty = dest_type, .val = val });
+    }
+    // TODO validate the type size and other compile errors
+    try sema.requireRuntimeBlock(block, inst.src);
+    return sema.mod.addUnOp(block, inst.src, dest_type, .bitcast, inst);
+}
+
+fn coerceArrayPtrToSlice(sema: *Sema, block: *Scope.Block, dest_type: Type, inst: *Inst) !*Inst {
+    if (inst.value()) |val| {
+        // The comptime Value representation is compatible with both types.
+        return sema.mod.constInst(block.arena, inst.src, .{ .ty = dest_type, .val = val });
+    }
+    return sema.mod.fail(&block.base, inst.src, "TODO implement coerceArrayPtrToSlice runtime instruction", .{});
+}
+
+fn coerceArrayPtrToMany(sema: *Sema, block: *Scope.Block, dest_type: Type, inst: *Inst) !*Inst {
+    if (inst.value()) |val| {
+        // The comptime Value representation is compatible with both types.
+        return sema.mod.constInst(block.arena, inst.src, .{ .ty = dest_type, .val = val });
+    }
+    return sema.mod.fail(&block.base, inst.src, "TODO implement coerceArrayPtrToMany runtime instruction", .{});
+}
+
+fn analyzeDeclVal(sema: *Sema, block: *Scope.Block, src: LazySrcLoc, decl: *Decl) InnerError!*Inst {
+    const decl_ref = try sema.analyzeDeclRef(block, src, decl);
+    return sema.analyzeDeref(block, src, decl_ref, src);
+}
+
+fn analyzeDeclRef(sema: *Sema, block: *Scope.Block, src: LazySrcLoc, decl: *Decl) InnerError!*Inst {
+    try sema.mod.declareDeclDependency(block.owner_decl, decl);
+    sema.mod.ensureDeclAnalyzed(decl) catch |err| {
+        if (block.func) |func| {
+            func.state = .dependency_failure;
+        } else {
+            block.owner_decl.analysis = .dependency_failure;
+        }
+        return err;
+    };
+
+    const decl_tv = try decl.typedValue();
+    if (decl_tv.val.tag() == .variable) {
+        return sema.analyzeVarRef(block, src, decl_tv);
+    }
+    return sema.mod.constInst(block.arena, src, .{
+        .ty = try sema.mod.simplePtrType(block.arena, decl_tv.ty, false, .One),
+        .val = try Value.Tag.decl_ref.create(block.arena, decl),
+    });
+}
+
+fn analyzeVarRef(sema: *Sema, block: *Scope.Block, src: LazySrcLoc, tv: TypedValue) InnerError!*Inst {
+    const mod = sema.mod;
+    const variable = tv.val.castTag(.variable).?.data;
+
+    const ty = try mod.simplePtrType(block.arena, tv.ty, variable.is_mutable, .One);
+    if (!variable.is_mutable and !variable.is_extern) {
+        return mod.constInst(block.arena, src, .{
+            .ty = ty,
+            .val = try Value.Tag.ref_val.create(block.arena, variable.init),
+        });
+    }
+
+    try sema.requireRuntimeBlock(block, src);
+    const inst = try block.arena.create(Inst.VarPtr);
+    inst.* = .{
+        .base = .{
+            .tag = .varptr,
+            .ty = ty,
+            .src = src,
+        },
+        .variable = variable,
+    };
+    try block.instructions.append(mod.gpa, &inst.base);
+    return &inst.base;
+}
+
+fn analyzeRef(
+    sema: *Sema,
+    block: *Scope.Block,
+    src: LazySrcLoc,
+    operand: *Inst,
+) InnerError!*Inst {
+    const ptr_type = try sema.mod.simplePtrType(block.arena, operand.ty, false, .One);
+
+    if (operand.value()) |val| {
+        return sema.mod.constInst(block.arena, src, .{
+            .ty = ptr_type,
+            .val = try Value.Tag.ref_val.create(block.arena, val),
+        });
+    }
+
+    try sema.requireRuntimeBlock(block, src);
+    return block.addUnOp(src, ptr_type, .ref, operand);
+}
+
+fn analyzeDeref(
+    sema: *Sema,
+    block: *Scope.Block,
+    src: LazySrcLoc,
+    ptr: *Inst,
+    ptr_src: LazySrcLoc,
+) InnerError!*Inst {
+    const elem_ty = switch (ptr.ty.zigTypeTag()) {
+        .Pointer => ptr.ty.elemType(),
+        else => return sema.mod.fail(&block.base, ptr_src, "expected pointer, found '{}'", .{ptr.ty}),
+    };
+    if (ptr.value()) |val| {
+        return sema.mod.constInst(block.arena, src, .{
+            .ty = elem_ty,
+            .val = try val.pointerDeref(block.arena),
+        });
+    }
+    }
+
+    try sema.requireRuntimeBlock(block, src);
+    return sema.mod.addUnOp(block, src, elem_ty, .load, ptr);
+}
+
+fn analyzeIsNull(
+    sema: *Sema,
+    block: *Scope.Block,
+    src: LazySrcLoc,
+    operand: *Inst,
+    invert_logic: bool,
+) InnerError!*Inst {
+    if (operand.value()) |opt_val| {
+        const is_null = opt_val.isNull();
+        const bool_value = if (invert_logic) !is_null else is_null;
+        return sema.mod.constBool(block.arena, src, bool_value);
+    }
+    try sema.requireRuntimeBlock(block, src);
+    const inst_tag: Inst.Tag = if (invert_logic) .is_non_null else .is_null;
+    return sema.mod.addUnOp(block, src, Type.initTag(.bool), inst_tag, operand);
+}
+
+fn analyzeIsErr(sema: *Sema, block: *Scope.Block, src: LazySrcLoc, operand: *Inst) InnerError!*Inst {
+    const mod = sema.mod;
+    const ot = operand.ty.zigTypeTag();
+    if (ot != .ErrorSet and ot != .ErrorUnion) return mod.constBool(block.arena, src, false);
+    if (ot == .ErrorSet) return mod.constBool(block.arena, src, true);
+    assert(ot == .ErrorUnion);
+    if (operand.value()) |err_union| {
+        return mod.constBool(block.arena, src, err_union.getError() != null);
+    }
+    try sema.requireRuntimeBlock(block, src);
+    return mod.addUnOp(block, src, Type.initTag(.bool), .is_err, operand);
+}
+
+fn analyzeSlice(
+    sema: *Sema,
+    block: *Scope.Block,
+    src: LazySrcLoc,
+    array_ptr: *Inst,
+    start: *Inst,
+    end_opt: ?*Inst,
+    sentinel_opt: ?*Inst,
+    sentinel_src: LazySrcLoc,
+) InnerError!*Inst {
+    const ptr_child = switch (array_ptr.ty.zigTypeTag()) {
+        .Pointer => array_ptr.ty.elemType(),
+        else => return sema.mod.fail(&block.base, src, "expected pointer, found '{}'", .{array_ptr.ty}),
+    };
+
+    var array_type = ptr_child;
+    const elem_type = switch (ptr_child.zigTypeTag()) {
+        .Array => ptr_child.elemType(),
+        .Pointer => blk: {
+            if (ptr_child.isSinglePointer()) {
+                if (ptr_child.elemType().zigTypeTag() == .Array) {
+                    array_type = ptr_child.elemType();
+                    break :blk ptr_child.elemType().elemType();
+                }
+
+                return sema.mod.fail(&block.base, src, "slice of single-item pointer", .{});
+            }
+            break :blk ptr_child.elemType();
+        },
+        else => return sema.mod.fail(&block.base, src, "slice of non-array type '{}'", .{ptr_child}),
+    };
+
+    const slice_sentinel = if (sentinel_opt) |sentinel| blk: {
+        const casted = try sema.coerce(block, elem_type, sentinel);
+        break :blk try sema.resolveConstValue(block, sentinel_src, casted);
+    } else null;
+
+    var return_ptr_size: std.builtin.TypeInfo.Pointer.Size = .Slice;
+    var return_elem_type = elem_type;
+    if (end_opt) |end| {
+        if (end.value()) |end_val| {
+            if (start.value()) |start_val| {
+                const start_u64 = start_val.toUnsignedInt();
+                const end_u64 = end_val.toUnsignedInt();
+                if (start_u64 > end_u64) {
+                    return sema.mod.fail(&block.base, src, "out of bounds slice", .{});
+                }
+
+                const len = end_u64 - start_u64;
+                const array_sentinel = if (array_type.zigTypeTag() == .Array and end_u64 == array_type.arrayLen())
+                    array_type.sentinel()
+                else
+                    slice_sentinel;
+                return_elem_type = try sema.mod.arrayType(&block.base, len, array_sentinel, elem_type);
+                return_ptr_size = .One;
+            }
+        }
+    }
+    const return_type = try sema.mod.ptrType(
+        &block.base,
+        return_elem_type,
+        if (end_opt == null) slice_sentinel else null,
+        0, // TODO alignment
+        0,
+        0,
+        !ptr_child.isConstPtr(),
+        ptr_child.isAllowzeroPtr(),
+        ptr_child.isVolatilePtr(),
+        return_ptr_size,
+    );
+
+    return sema.mod.fail(&block.base, src, "TODO implement analysis of slice", .{});
+}
+
+fn analyzeImport(sema: *Sema, block: *Scope.Block, src: LazySrcLoc, target_string: []const u8) !*Scope.File {
+    const mod = sema.mod;
+    const cur_pkg = block.base.getFileScope().pkg;
+    const cur_pkg_dir_path = cur_pkg.root_src_directory.path orelse ".";
+    const found_pkg = cur_pkg.table.get(target_string);
+
+    const resolved_path = if (found_pkg) |pkg|
+        try std.fs.path.resolve(mod.gpa, &[_][]const u8{ pkg.root_src_directory.path orelse ".", pkg.root_src_path })
+    else
+        try std.fs.path.resolve(mod.gpa, &[_][]const u8{ cur_pkg_dir_path, target_string });
+    errdefer mod.gpa.free(resolved_path);
+
+    if (mod.import_table.get(resolved_path)) |some| {
+        mod.gpa.free(resolved_path);
+        return some;
+    }
+
+    if (found_pkg == null) {
+        const resolved_root_path = try std.fs.path.resolve(mod.gpa, &[_][]const u8{cur_pkg_dir_path});
+        defer mod.gpa.free(resolved_root_path);
+
+        if (!mem.startsWith(u8, resolved_path, resolved_root_path)) {
+            return error.ImportOutsidePkgPath;
+        }
+    }
+
+    // TODO Scope.Container arena for ty and sub_file_path
+    const file_scope = try mod.gpa.create(Scope.File);
+    errdefer mod.gpa.destroy(file_scope);
+    const struct_ty = try Type.Tag.empty_struct.create(mod.gpa, &file_scope.root_container);
+    errdefer mod.gpa.destroy(struct_ty.castTag(.empty_struct).?);
+
+    file_scope.* = .{
+        .sub_file_path = resolved_path,
+        .source = .{ .unloaded = {} },
+        .tree = undefined,
+        .status = .never_loaded,
+        .pkg = found_pkg orelse cur_pkg,
+        .root_container = .{
+            .file_scope = file_scope,
+            .decls = .{},
+            .ty = struct_ty,
+        },
+    };
+    mod.analyzeContainer(&file_scope.root_container) catch |err| switch (err) {
+        error.AnalysisFail => {
+            assert(mod.comp.totalErrorCount() != 0);
+        },
+        else => |e| return e,
+    };
+    try mod.import_table.put(mod.gpa, file_scope.sub_file_path, file_scope);
+    return file_scope;
+}
+
+/// Asserts that lhs and rhs types are both numeric.
+fn cmpNumeric(
+    sema: *Sema,
+    block: *Scope.Block,
+    src: LazySrcLoc,
+    lhs: *Inst,
+    rhs: *Inst,
+    op: std.math.CompareOperator,
+) InnerError!*Inst {
+    const mod = sema.mod;
+    assert(lhs.ty.isNumeric());
+    assert(rhs.ty.isNumeric());
+
+    const lhs_ty_tag = lhs.ty.zigTypeTag();
+    const rhs_ty_tag = rhs.ty.zigTypeTag();
+
+    if (lhs_ty_tag == .Vector and rhs_ty_tag == .Vector) {
+        if (lhs.ty.arrayLen() != rhs.ty.arrayLen()) {
+            return sema.mod.fail(&block.base, src, "vector length mismatch: {d} and {d}", .{
+                lhs.ty.arrayLen(),
+                rhs.ty.arrayLen(),
+            });
+        }
+        return sema.mod.fail(&block.base, src, "TODO implement support for vectors in cmpNumeric", .{});
+    } else if (lhs_ty_tag == .Vector or rhs_ty_tag == .Vector) {
+        return sema.mod.fail(&block.base, src, "mixed scalar and vector operands to comparison operator: '{}' and '{}'", .{
+            lhs.ty,
+            rhs.ty,
+        });
+    }
+
+    if (lhs.value()) |lhs_val| {
+        if (rhs.value()) |rhs_val| {
+            return mod.constBool(block.arena, src, Value.compare(lhs_val, op, rhs_val));
+        }
+    }
+
+    // TODO handle comparisons against lazy zero values
+    // Some values can be compared against zero without being runtime known or without forcing
+    // a full resolution of their value, for example `@sizeOf(@Frame(function))` is known to
+    // always be nonzero, and we benefit from not forcing the full evaluation and stack frame layout
+    // of this function if we don't need to.
+
+    // It must be a runtime comparison.
+    try sema.requireRuntimeBlock(block, src);
+    // For floats, emit a float comparison instruction.
+    const lhs_is_float = switch (lhs_ty_tag) {
+        .Float, .ComptimeFloat => true,
+        else => false,
+    };
+    const rhs_is_float = switch (rhs_ty_tag) {
+        .Float, .ComptimeFloat => true,
+        else => false,
+    };
+    if (lhs_is_float and rhs_is_float) {
+        // Implicit cast the smaller one to the larger one.
+        const dest_type = x: {
+            if (lhs_ty_tag == .ComptimeFloat) {
+                break :x rhs.ty;
+            } else if (rhs_ty_tag == .ComptimeFloat) {
+                break :x lhs.ty;
+            }
+            if (lhs.ty.floatBits(mod.getTarget()) >= rhs.ty.floatBits(mod.getTarget())) {
+                break :x lhs.ty;
+            } else {
+                break :x rhs.ty;
+            }
+        };
+        const casted_lhs = try sema.coerce(block, dest_type, lhs);
+        const casted_rhs = try sema.coerce(block, dest_type, rhs);
+        return mod.addBinOp(block, src, dest_type, Inst.Tag.fromCmpOp(op), casted_lhs, casted_rhs);
+    }
+    // For mixed unsigned integer sizes, implicit cast both operands to the larger integer.
+    // For mixed signed and unsigned integers, implicit cast both operands to a signed
+    // integer with + 1 bit.
+    // For mixed floats and integers, extract the integer part from the float, cast that to
+    // a signed integer with mantissa bits + 1, and if there was any non-integral part of the float,
+    // add/subtract 1.
+    const lhs_is_signed = if (lhs.value()) |lhs_val|
+        lhs_val.compareWithZero(.lt)
+    else
+        (lhs.ty.isFloat() or lhs.ty.isSignedInt());
+    const rhs_is_signed = if (rhs.value()) |rhs_val|
+        rhs_val.compareWithZero(.lt)
+    else
+        (rhs.ty.isFloat() or rhs.ty.isSignedInt());
+    const dest_int_is_signed = lhs_is_signed or rhs_is_signed;
+
+    var dest_float_type: ?Type = null;
+
+    var lhs_bits: usize = undefined;
+    if (lhs.value()) |lhs_val| {
+        if (lhs_val.isUndef())
+            return mod.constUndef(block.arena, src, Type.initTag(.bool));
+        const is_unsigned = if (lhs_is_float) x: {
+            var bigint_space: Value.BigIntSpace = undefined;
+            var bigint = try lhs_val.toBigInt(&bigint_space).toManaged(mod.gpa);
+            defer bigint.deinit();
+            const zcmp = lhs_val.orderAgainstZero();
+            if (lhs_val.floatHasFraction()) {
+                switch (op) {
+                    .eq => return mod.constBool(block.arena, src, false),
+                    .neq => return mod.constBool(block.arena, src, true),
+                    else => {},
+                }
+                if (zcmp == .lt) {
+                    try bigint.addScalar(bigint.toConst(), -1);
+                } else {
+                    try bigint.addScalar(bigint.toConst(), 1);
+                }
+            }
+            lhs_bits = bigint.toConst().bitCountTwosComp();
+            break :x (zcmp != .lt);
+        } else x: {
+            lhs_bits = lhs_val.intBitCountTwosComp();
+            break :x (lhs_val.orderAgainstZero() != .lt);
+        };
+        lhs_bits += @boolToInt(is_unsigned and dest_int_is_signed);
+    } else if (lhs_is_float) {
+        dest_float_type = lhs.ty;
+    } else {
+        const int_info = lhs.ty.intInfo(mod.getTarget());
+        lhs_bits = int_info.bits + @boolToInt(int_info.signedness == .unsigned and dest_int_is_signed);
+    }
+
+    var rhs_bits: usize = undefined;
+    if (rhs.value()) |rhs_val| {
+        if (rhs_val.isUndef())
+            return mod.constUndef(block.arena, src, Type.initTag(.bool));
+        const is_unsigned = if (rhs_is_float) x: {
+            var bigint_space: Value.BigIntSpace = undefined;
+            var bigint = try rhs_val.toBigInt(&bigint_space).toManaged(mod.gpa);
+            defer bigint.deinit();
+            const zcmp = rhs_val.orderAgainstZero();
+            if (rhs_val.floatHasFraction()) {
+                switch (op) {
+                    .eq => return mod.constBool(block.arena, src, false),
+                    .neq => return mod.constBool(block.arena, src, true),
+                    else => {},
+                }
+                if (zcmp == .lt) {
+                    try bigint.addScalar(bigint.toConst(), -1);
+                } else {
+                    try bigint.addScalar(bigint.toConst(), 1);
+                }
+            }
+            rhs_bits = bigint.toConst().bitCountTwosComp();
+            break :x (zcmp != .lt);
+        } else x: {
+            rhs_bits = rhs_val.intBitCountTwosComp();
+            break :x (rhs_val.orderAgainstZero() != .lt);
+        };
+        rhs_bits += @boolToInt(is_unsigned and dest_int_is_signed);
+    } else if (rhs_is_float) {
+        dest_float_type = rhs.ty;
+    } else {
+        const int_info = rhs.ty.intInfo(mod.getTarget());
+        rhs_bits = int_info.bits + @boolToInt(int_info.signedness == .unsigned and dest_int_is_signed);
+    }
+
+    const dest_type = if (dest_float_type) |ft| ft else blk: {
+        const max_bits = std.math.max(lhs_bits, rhs_bits);
+        const casted_bits = std.math.cast(u16, max_bits) catch |err| switch (err) {
+            error.Overflow => return sema.mod.fail(&block.base, src, "{d} exceeds maximum integer bit count", .{max_bits}),
+        };
+        break :blk try sema.mod.makeIntType(&block.base, dest_int_is_signed, casted_bits);
+    };
+    const casted_lhs = try sema.coerce(block, dest_type, lhs);
+    const casted_rhs = try sema.coerce(block, dest_type, rhs);
+
+    return sema.mod.addBinOp(block, src, Type.initTag(.bool), Inst.Tag.fromCmpOp(op), casted_lhs, casted_rhs);
+}
+
+fn wrapOptional(sema: *Sema, block: *Scope.Block, dest_type: Type, inst: *Inst) !*Inst {
+    if (inst.value()) |val| {
+        return sema.mod.constInst(block.arena, inst.src, .{ .ty = dest_type, .val = val });
+    }
+
+    try sema.requireRuntimeBlock(block, inst.src);
+    return sema.mod.addUnOp(block, inst.src, dest_type, .wrap_optional, inst);
+}
+
+fn wrapErrorUnion(sema: *Sema, block: *Scope.Block, dest_type: Type, inst: *Inst) !*Inst {
+    // TODO deal with inferred error sets
+    const err_union = dest_type.castTag(.error_union).?;
+    if (inst.value()) |val| {
+        const to_wrap = if (inst.ty.zigTypeTag() != .ErrorSet) blk: {
+            _ = try sema.coerce(block, err_union.data.payload, inst);
+            break :blk val;
+        } else switch (err_union.data.error_set.tag()) {
+            .anyerror => val,
+            .error_set_single => blk: {
+                const n = err_union.data.error_set.castTag(.error_set_single).?.data;
+                if (!mem.eql(u8, val.castTag(.@"error").?.data.name, n))
+                    return sema.mod.fail(&block.base, inst.src, "expected type '{}', found type '{}'", .{ err_union.data.error_set, inst.ty });
+                break :blk val;
+            },
+            .error_set => blk: {
+                const f = err_union.data.error_set.castTag(.error_set).?.data.typed_value.most_recent.typed_value.val.castTag(.error_set).?.data.fields;
+                if (f.get(val.castTag(.@"error").?.data.name) == null)
+                    return sema.mod.fail(&block.base, inst.src, "expected type '{}', found type '{}'", .{ err_union.data.error_set, inst.ty });
+                break :blk val;
+            },
+            else => unreachable,
+        };
+
+        return sema.mod.constInst(block.arena, inst.src, .{
+            .ty = dest_type,
+            // wrap the payload value in an error_union Value
+            .val = try Value.Tag.error_union.create(
+                block.arena,
+                to_wrap,
+            ),
+        });
+    }
+
+    try sema.requireRuntimeBlock(block, inst.src);
+
+    // we are coercing from E to E!T
+    if (inst.ty.zigTypeTag() == .ErrorSet) {
+        const coerced = try sema.coerce(block, err_union.data.error_set, inst);
+        return sema.mod.addUnOp(block, inst.src, dest_type, .wrap_errunion_err, coerced);
+    } else {
+        const coerced = try sema.coerce(block, err_union.data.payload, inst);
+        return sema.mod.addUnOp(block, inst.src, dest_type, .wrap_errunion_payload, coerced);
+    }
+}
+
+fn resolvePeerTypes(sema: *Sema, block: *Scope.Block, instructions: []*Inst) !Type {
+    if (instructions.len == 0)
+        return Type.initTag(.noreturn);
+
+    if (instructions.len == 1)
+        return instructions[0].ty;
+
+    var chosen = instructions[0];
+    for (instructions[1..]) |candidate| {
+        if (candidate.ty.eql(chosen.ty))
+            continue;
+        if (candidate.ty.zigTypeTag() == .NoReturn)
+            continue;
+        if (chosen.ty.zigTypeTag() == .NoReturn) {
+            chosen = candidate;
+            continue;
+        }
+        if (candidate.ty.zigTypeTag() == .Undefined)
+            continue;
+        if (chosen.ty.zigTypeTag() == .Undefined) {
+            chosen = candidate;
+            continue;
+        }
+        if (chosen.ty.isInt() and
+            candidate.ty.isInt() and
+            chosen.ty.isSignedInt() == candidate.ty.isSignedInt())
+        {
+            if (chosen.ty.intInfo(sema.mod.getTarget()).bits < candidate.ty.intInfo(sema.mod.getTarget()).bits) {
+                chosen = candidate;
+            }
+            continue;
+        }
+        if (chosen.ty.isFloat() and candidate.ty.isFloat()) {
+            if (chosen.ty.floatBits(sema.mod.getTarget()) < candidate.ty.floatBits(sema.mod.getTarget())) {
+                chosen = candidate;
+            }
+            continue;
+        }
+
+        if (chosen.ty.zigTypeTag() == .ComptimeInt and candidate.ty.isInt()) {
+            chosen = candidate;
+            continue;
+        }
+
+        if (chosen.ty.isInt() and candidate.ty.zigTypeTag() == .ComptimeInt) {
+            continue;
+        }
+
+        // TODO error notes pointing out each type
+        return sema.mod.fail(&block.base, candidate.src, "incompatible types: '{}' and '{}'", .{ chosen.ty, candidate.ty });
+    }
+
+    return chosen.ty;
 }
BRANCH_TODO
@@ -0,0 +1,126 @@
+this is my WIP branch scratch pad, to be deleted before merging into master
+
+Merge TODO list:
+ * fix discrepancy between TZIR wanting src: usize (byte offset) and Sema
+   now providing LazySrcLoc
+ * fix compile errors
+ * don't have an explicit dbg_stmt zir instruction - instead merge it with
+   var decl and assignment instructions, etc.
+   - make it set sema.src where appropriate
+ * remove the LazySrcLoc.todo tag
+ * update astgen.zig
+ * finish updating Sema.zig
+ * finish implementing SrcLoc byteOffset function
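+
+For reference while working on the src discrepancy, the rough shape of
+LazySrcLoc as its tags appear in the errSrcLoc switch further down (a
+sketch only; the real union has more variants than these, and the payload
+types here are guesses):
+
+const LazySrcLoc = union(enum) {
+    /// Absolute byte offset into the file; resolvable with no extra context.
+    byte_offset: usize,
+    /// Offset relative to an anchor token; needs the AST to resolve.
+    token_offset: usize,
+    /// Offset relative to the owner Decl's AST node; needs the AST to resolve.
+    node_offset: i32,
+    /// Points at the type expression of a var decl, relative to the Decl node.
+    node_offset_var_decl_ty: i32,
+};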
+
+
+Performance optimizations to look into:
+ * don't store end index for blocks; rely on last instruction being noreturn
+ * introduce special form for function call statement with 0 or 1 parameters
+ * look into not storing the field name of field access as a string in zir
+   instructions. Options to look into:
+   - intern strings into string_bytes (local to the owner Decl)
+   - allow field access based on a token/node and have it reference the
+     source code bytes directly
+   - null-terminated string variants which avoid having to store the length
+   - Look into this for enum literals too
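+
+Sketch of the null-terminated string variant idea (illustrative only; the
+NullTerminatedString name and the helper below are made up):
+
+/// Index into string_bytes; the string runs until the next 0 byte, so the
+/// instruction does not need to store a length.
+const NullTerminatedString = u32;
+
+fn getString(string_bytes: []const u8, index: NullTerminatedString) [:0]const u8 {
+    const slice = string_bytes[index..];
+    const len = std.mem.indexOfScalar(u8, slice, 0).?;
+    return slice[0..len :0];
+}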
+
+
+Random snippets of code that I deleted; I need to make sure they get
+re-integrated appropriately:
+
+
+fn zirArg(mod: *Module, scope: *Scope, inst: *zir.Inst.Arg) InnerError!*Inst {
+    const b = try mod.requireFunctionBlock(scope, inst.base.src);
+    const fn_ty = b.func.?.owner_decl.typed_value.most_recent.typed_value.ty;
+    const param_index = b.instructions.items.len;
+    const param_count = fn_ty.fnParamLen();
+    if (param_index >= param_count) {
+        return mod.fail(scope, inst.base.src, "parameter index {d} outside list of length {d}", .{
+            param_index,
+            param_count,
+        });
+    }
+    const param_type = fn_ty.fnParamType(param_index);
+    const name = try scope.arena().dupeZ(u8, inst.positionals.name);
+    return mod.addArg(b, inst.base.src, param_type, name);
+}
+
+
+fn zirReturnVoid(mod: *Module, scope: *Scope, inst: *zir.Inst.NoOp) InnerError!*Inst {
+    const tracy = trace(@src());
+    defer tracy.end();
+    const b = try mod.requireFunctionBlock(scope, inst.base.src);
+    if (b.inlining) |inlining| {
+        // We are inlining a function call; rewrite the `retvoid` as a `breakvoid`.
+        const void_inst = try mod.constVoid(scope, inst.base.src);
+        try inlining.merges.results.append(mod.gpa, void_inst);
+        const br = try mod.addBr(b, inst.base.src, inlining.merges.block_inst, void_inst);
+        return &br.base;
+    }
+
+    if (b.func) |func| {
+        // Need to emit a compile error if returning void is not allowed.
+        const void_inst = try mod.constVoid(scope, inst.base.src);
+        const fn_ty = func.owner_decl.typed_value.most_recent.typed_value.ty;
+        const casted_void = try mod.coerce(scope, fn_ty.fnReturnType(), void_inst);
+        if (casted_void.ty.zigTypeTag() != .Void) {
+            return mod.addUnOp(b, inst.base.src, Type.initTag(.noreturn), .ret, casted_void);
+        }
+    }
+    return mod.addNoOp(b, inst.base.src, Type.initTag(.noreturn), .retvoid);
+}
+
+
+fn zirReturn(mod: *Module, scope: *Scope, inst: *zir.Inst.UnOp) InnerError!*Inst {
+    const tracy = trace(@src());
+    defer tracy.end();
+    const operand = try resolveInst(mod, scope, inst.positionals.operand);
+    const b = try mod.requireFunctionBlock(scope, inst.base.src);
+
+    if (b.inlining) |inlining| {
+        // We are inlining a function call; rewrite the `ret` as a `break`.
+        try inlining.merges.results.append(mod.gpa, operand);
+        const br = try mod.addBr(b, inst.base.src, inlining.merges.block_inst, operand);
+        return &br.base;
+    }
+
+    return mod.addUnOp(b, inst.base.src, Type.initTag(.noreturn), .ret, operand);
+}
+
+fn zirPrimitive(mod: *Module, scope: *Scope, primitive: *zir.Inst.Primitive) InnerError!*Inst {
+    const tracy = trace(@src());
+    defer tracy.end();
+    return mod.constInst(scope, primitive.base.src, primitive.positionals.tag.toTypedValue());
+}
+
+
+
+
+    /// Each Decl gets its own string interning, in order to avoid contention when
+    /// using multiple threads to analyze Decls in parallel. Any particular Decl will only
+    /// be touched by a single thread at one time.
+    strings: StringTable = .{},
+
+    /// The string memory referenced here is stored inside the Decl's arena.
+    pub const StringTable = std.StringArrayHashMapUnmanaged(void);
+
+
+
+
+pub fn errSrcLoc(mod: *Module, scope: *Scope, src: LazySrcLoc) SrcLoc {
+    const file_scope = scope.getFileScope();
+    switch (src) {
+        .byte_offset => |off| return .{
+            .file_scope = file_scope,
+            .byte_offset = off,
+        },
+        .token_offset => |off| {
+            @panic("TODO errSrcLoc for token_offset");
+        },
+        .node_offset => |off| {
+            @panic("TODO errSrcLoc for node_offset");
+        },
+        .node_offset_var_decl_ty => |off| {
+            @panic("TODO errSrcLoc for node_offset_var_decl_ty");
+        },
+    }
+}
+