Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.2189.mcz

==================== Summary ====================

Name: VMMaker.oscog-eem.2189
Author: eem
Time: 5 April 2017, 1:14:07.351736 pm
UUID: 5f33d87d-a790-44c5-8250-aae6fba7da74
Ancestors: VMMaker.oscog-eem.2188

RegisterAllocatingCogit/SistaRegisterAllocatingCogit:
Fix slips in RegisterAllocatingCogit>>genSpecialSelectorComparison (no need to juggle assigned registers) and in mergeCurrentSimStackWith:forwards: (methodOrBlockNumTemps is the loop limit, not methodOrBlockNumArgs).
Fix moveVolatileSimStackEntriesToRegisters to observe the registers assigned to temps.
Copy the merge logic into SistaRegisterAllocatingCogit>>genSpecialSelectorComparison.
Add more places to check the duplicateRegisterAssignmentsInTemporaries deny.
Rewrite RegisterAllocatingCogit>>resetSimStack: to avoid a spurious assert failure when preparing the second compilation pass.
SistaCogitClone's methods have also been auto-updated.

=============== Diff against VMMaker.oscog-eem.2188 ===============

Item was added:
+ ----- Method: RegisterAllocatingCogit>>allocatedRegisters (in category 'simulation stack') -----
+ allocatedRegisters
+ 	| regsSet |
+ 	self assert: needsFrame.
+ 	regsSet := 0.
+ 	0 to: methodOrBlockNumTemps do:
+ 		[:i|
+ 		regsSet := regsSet bitOr: (self simStackAt: i) registerMask].
+ 	^regsSet!

Item was changed:
----- Method: RegisterAllocatingCogit>>compileAbstractInstructionsFrom:through: (in category 'compile abstract instructions') -----
compileAbstractInstructionsFrom: start through: end
	"Loop over bytecodes, dispatching to the generator for each bytecode, handling fixups in due course.
	 Override to provide a development-time only escape for failed merges due to partially implemented parallel move.
	 Override to recompile after a loop requiring a merge is detected."
	^[| result initialOpcodeIndex initialCounterIndex initialIndexOfIRC |
	  compilationPass := 1.
	  initialOpcodeIndex := opcodeIndex.
	  initialCounterIndex := self maybeCounterIndex. "for SistaCogit"
	  literalsManager saveForRecompile.
	  NewspeakVM ifTrue: [initialIndexOfIRC := indexOfIRC].
	  [recompileForLoopRegisterAssignments := false.
	   result := super compileAbstractInstructionsFrom: start through: end.
	   result = 0 and: [recompileForLoopRegisterAssignments]] whileTrue:
+ 		[self assert: compilationPass <= 2.
+ 		 self reinitializeAllButBackwardFixupsFrom: start through: end.
- 		[self reinitializeAllButBackwardFixupsFrom: start through: end.
		 self resetSimStack: start.
		 self reinitializeOpcodesFrom: initialOpcodeIndex to: opcodeIndex - 1.
		 compilationPass := compilationPass + 1.
		 nextFixup := 0.
		 opcodeIndex := initialOpcodeIndex.
		 self maybeSetCounterIndex: initialCounterIndex. "For SistaCogit"
		 literalsManager resetForRecompile.
		 NewspeakVM ifTrue: [indexOfIRC := initialIndexOfIRC]].
	  result]
		on: Notification
		do: [:ex|
			ex tag == #failedMerge ifTrue:
				[coInterpreter transcript
					ensureCr;
					nextPutAll: 'FAILED MERGE IN ';
					nextPutAll: (coInterpreter nameOfClass: (coInterpreter methodClassOf: methodObj));
					nextPutAll: '>>#';
					nextPutAll: (coInterpreter stringOf: (coInterpreter maybeSelectorOfMethod: methodObj));
					flush.
				 ^ShouldNotJIT].
			ex pass]!
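[Editorial note for readers new to the Cogit's register bookkeeping: allocatedRegisters and liveRegisters above use the Cogit's usual bit-mask idiom, in which each register occupies one bit of a mask, masks are combined with bitOr:, and membership is tested with anyMask:/noMask:. The workspace sketch below is illustrative only; the register numbers and the maskFor block are stand-ins for the real registerMaskFor: machinery, not VMMaker code.]

	"Illustrative sketch of the register-mask idiom (invented register numbers)."
	| maskFor rcvrResultReg arg0Reg allocated |
	rcvrResultReg := 2.	"assumed encoding, for the example only"
	arg0Reg := 3.		"assumed encoding, for the example only"
	maskFor := [:reg| 1 bitShift: reg].	"one bit per register"
	allocated := 0.
	allocated := allocated bitOr: (maskFor value: rcvrResultReg).
	allocated := allocated bitOr: (maskFor value: arg0Reg).
	allocated anyMask: (maskFor value: arg0Reg)	"true: that register is already assigned"
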
Item was changed: ----- Method: RegisterAllocatingCogit>>genSpecialSelectorComparison (in category 'bytecode generators') ----- genSpecialSelectorComparison | nextPC postBranchPC targetPC primDescriptor branchDescriptor rcvrIsInt rcvrIsConst argIsIntConst argInt jumpNotSmallInts inlineCAB index rcvrReg argReg branchToTarget needMergeToContinue needMergeToTarget | <var: #primDescriptor type: #'BytecodeDescriptor *'> <var: #branchDescriptor type: #'BytecodeDescriptor *'> <var: #jumpNotSmallInts type: #'AbstractInstruction *'> primDescriptor := self generatorAt: byte0. argIsIntConst := self ssTop type = SSConstant and: [objectMemory isIntegerObject: (argInt := self ssTop constant)]. rcvrIsInt := ((rcvrIsConst := (self ssValue: 1) type = SSConstant) and: [objectMemory isIntegerObject: (self ssValue: 1) constant]) or: [self mclassIsSmallInteger and: [(self ssValue: 1) isSameEntryAs: (self addressOf: simSelf)]]. (argIsIntConst and: [rcvrIsInt and: [rcvrIsConst]]) ifTrue: [^self genStaticallyResolvedSpecialSelectorComparison]. self extractMaybeBranchDescriptorInto: [ :descr :next :postBranch :target | branchDescriptor := descr. nextPC := next. postBranchPC := postBranch. targetPC := target ]. "Only interested in inlining if followed by a conditional branch." inlineCAB := branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]. "Further, only interested in inlining = and ~= if there's a SmallInteger constant involved. The relational operators successfully statically predict SmallIntegers; the equality operators do not." (inlineCAB and: [primDescriptor opcode = JumpZero or: [primDescriptor opcode = JumpNonZero]]) ifTrue: [inlineCAB := argIsIntConst or: [rcvrIsInt]]. inlineCAB ifFalse: [^self genSpecialSelectorSend]. "In-line the comparison and the jump, but if the types are not SmallInteger then we will need to do a send and fall through to the following conditional branch. Since we're allocating values in registers we would like to keep those registers live on the inlined path and reload registers along the non-inlined send path. The merge logic at the branch destinations handles this." argIsIntConst ifTrue: [rcvrReg := self allocateRegForStackEntryAt: 1. (self ssValue: 1) popToReg: rcvrReg. argReg := NoReg] ifFalse: [self allocateRegForStackTopTwoEntriesInto: [:rTop :rNext| argReg := rTop. rcvrReg := rNext]. - rcvrReg = Arg0Reg ifTrue: - [rcvrReg := argReg. argReg := Arg0Reg]. self ssTop popToReg: argReg. (self ssValue: 1) popToReg: rcvrReg]. self ssPop: 2. jumpNotSmallInts := (rcvrIsInt and: [argIsIntConst]) ifFalse: [argIsIntConst ifTrue: [objectRepresentation genJumpNotSmallInteger: rcvrReg] ifFalse: [rcvrIsInt ifTrue: [objectRepresentation genJumpNotSmallInteger: argReg] ifFalse: [objectRepresentation genJumpNotSmallIntegersIn: rcvrReg and: argReg scratch: TempReg]]]. argIsIntConst ifTrue: [self CmpCq: argInt R: rcvrReg] ifFalse: [self CmpR: argReg R: rcvrReg]. "self printSimStack; printSimStack: (self fixupAt: postBranchPC) mergeSimStack; printSimStack: (self fixupAt: targetPC) mergeSimStack" "If there are merges to be performed on the forward branches we have to execute the merge code only along the path requiring that merge, and exactly once." needMergeToTarget := self mergeRequiredForJumpTo: targetPC. needMergeToContinue := self mergeRequiredForJumpTo: postBranchPC. "Cmp is weird/backwards so invert the comparison." 
(needMergeToTarget and: [needMergeToContinue]) ifTrue: [branchToTarget := self genConditionalBranch: (branchDescriptor isBranchTrue ifTrue: [primDescriptor opcode] ifFalse: [self inverseBranchFor: primDescriptor opcode]) operand: 0. self Jump: (self ensureFixupAt: postBranchPC). branchToTarget jmpTarget: self Label. self Jump: (self ensureFixupAt: targetPC)]. (needMergeToTarget and: [needMergeToContinue not]) ifTrue: [self genConditionalBranch: (branchDescriptor isBranchFalse ifTrue: [primDescriptor opcode] ifFalse: [self inverseBranchFor: primDescriptor opcode]) operand: (self ensureFixupAt: postBranchPC) asUnsignedInteger. self Jump: (self ensureFixupAt: targetPC)]. (needMergeToTarget not and: [needMergeToContinue]) ifTrue: [self genConditionalBranch: (branchDescriptor isBranchTrue ifTrue: [primDescriptor opcode] ifFalse: [self inverseBranchFor: primDescriptor opcode]) operand: (self ensureFixupAt: targetPC) asUnsignedInteger. self Jump: (self ensureFixupAt: postBranchPC)]. (needMergeToTarget or: [needMergeToContinue]) ifFalse: [self genConditionalBranch: (branchDescriptor isBranchTrue ifTrue: [primDescriptor opcode] ifFalse: [self inverseBranchFor: primDescriptor opcode]) operand: (self ensureFixupAt: targetPC) asUnsignedInteger. self Jump: (self ensureFixupAt: postBranchPC)]. jumpNotSmallInts ifNil: [self annotateInstructionForBytecode. deadCode := true. ^0]. jumpNotSmallInts jmpTarget: self Label. self ssFlushTo: simStackPtr. rcvrReg = Arg0Reg ifTrue: [argReg = ReceiverResultReg ifTrue: [self SwapR: Arg0Reg R: Arg0Reg Scratch: TempReg. argReg := Arg0Reg] ifFalse: [self MoveR: rcvrReg R: ReceiverResultReg]. rcvrReg := ReceiverResultReg]. argIsIntConst ifTrue: [self MoveCq: argInt R: Arg0Reg] ifFalse: [argReg ~= Arg0Reg ifTrue: [self MoveR: argReg R: Arg0Reg]]. rcvrReg ~= ReceiverResultReg ifTrue: [self MoveR: rcvrReg R: ReceiverResultReg]. index := byte0 - self firstSpecialSelectorBytecodeOffset. ^self genMarshalledSend: index negated - 1 numArgs: 1 sendTable: ordinarySendTrampolines! Item was changed: ----- Method: RegisterAllocatingCogit>>liveRegisters (in category 'simulation stack') ----- liveRegisters | regsSet | + self assert: needsFrame. regsSet := 0. 0 to: simStackPtr do: [:i| regsSet := regsSet bitOr: (self simStackAt: i) registerMask]. LowcodeVM ifTrue: [(simNativeSpillBase max: 0) to: simNativeStackPtr do: [:i| regsSet := regsSet bitOr: (self simNativeStackAt: i) nativeRegisterMask]]. ^regsSet! Item was changed: ----- Method: RegisterAllocatingCogit>>mergeCurrentSimStackWith:forwards: (in category 'bytecode generator support') ----- mergeCurrentSimStackWith: fixup forwards: forwards "At a merge point the cogit expects the stack to be in the same state as mergeSimStack. mergeSimStack is the state as of some jump forward or backward to this point. So make simStack agree with mergeSimStack (it is, um, problematic to plant code at the jump). Values may have to be assigned to registers. Registers may have to be swapped. The state of optStatus must agree. Generate code to merge the current simStack with that of the target fixup, the goal being to keep as many registers live as possible. If the merge is forwards registers can be deassigned (since registers are always written to temp vars). But if backwards, nothing can be deassigned, and the state /must/ reflect the target." 
"self printSimStack; printSimStack: fixup mergeSimStack" "abstractOpcodes object copyFrom: startIndex to: opcodeIndex" <var: #fixup type: #'BytecodeFixup *'> | startIndex mergeSimStack currentEntry targetEntry writtenToRegisters | <var: #mergeSimStack type: #'SimStackEntry *'> <var: #targetEntry type: #'SimStackEntry *'> <var: #currentEntry type: #'SimStackEntry *'> (mergeSimStack := fixup mergeSimStack) ifNil: [^self]. startIndex := opcodeIndex. "for debugging" "Assignments amongst the registers must be made in order to avoid overwriting. If necessary exchange registers amongst simStack's entries to resolve any conflicts." self resolveRegisterOrderConflictsBetweenCurrentSimStackAnd: mergeSimStack. (self asserta: (self conflictsResolvedBetweenSimStackAnd: mergeSimStack)) ifFalse: [Notification new tag: #failedMerge; signal]. + "Compute written to registers. Perhaps we should use 0 in place of methodOrBlockNumTemps - "Compute written to registers. Perhaps we should use 0 in place of methodOrBlockNumArgs but Smalltalk does not assign to arguments." writtenToRegisters := 0. (self pushForMergeWith: mergeSimStack) ifTrue: + [methodOrBlockNumTemps to: simStackPtr do: - [methodOrBlockNumArgs to: simStackPtr do: [:i| currentEntry := self simStack: simStack at: i. targetEntry := self simStack: mergeSimStack at: i. writtenToRegisters := writtenToRegisters bitOr: targetEntry registerMask. (currentEntry reconcileForwardsWith: targetEntry) ifTrue: + [self assert: i >= methodOrBlockNumTemps. - [self assert: i >= methodOrBlockNumArgs. self deassignRegisterForTempVar: targetEntry in: mergeSimStack. targetEntry type: SSRegister; register: targetEntry liveRegister]. "Note, we could update the simStack and spillBase here but that is done in restoreSimStackAtMergePoint: spilled ifFalse: [simSpillBase := i - 1]. simStack at: i put: (self cCode: [mergeSimStack at: i] inSmalltalk: [(mergeSimStack at: i) copy])"]] ifFalse: + [simStackPtr to: methodOrBlockNumTemps by: -1 do: - [simStackPtr to: methodOrBlockNumArgs by: -1 do: [:i| currentEntry := self simStack: simStack at: i. targetEntry := self simStack: mergeSimStack at: i. writtenToRegisters := writtenToRegisters bitOr: targetEntry registerMask. (currentEntry reconcileForwardsWith: targetEntry) ifTrue: + [self assert: i >= methodOrBlockNumTemps. - [self assert: i >= methodOrBlockNumArgs. self deassignRegisterForTempVar: targetEntry in: mergeSimStack. targetEntry type: SSRegister; register: targetEntry liveRegister]. "Note, we could update the simStack and spillBase here but that is done in restoreSimStackAtMergePoint: spilled ifFalse: [simSpillBase := i - 1]. simStack at: i put: (self cCode: [mergeSimStack at: i] inSmalltalk: [(mergeSimStack at: i) copy])"]]. "Note that since we've deassigned any conflicts beyond the temps above we need only compare the temps here." methodOrBlockNumTemps - 1 to: 0 by: -1 do: [:i| targetEntry := self simStack: mergeSimStack at: i. (targetEntry registerMask noMask: writtenToRegisters) ifTrue: [currentEntry := self simStack: simStack at: i. writtenToRegisters := writtenToRegisters bitOr: targetEntry registerMask. (currentEntry reconcileForwardsWith: targetEntry) ifTrue: [self assert: i >= methodOrBlockNumArgs. self deassignRegisterForTempVar: targetEntry in: mergeSimStack]]]. optStatus isReceiverResultRegLive ifFalse: [forwards ifTrue: "a.k.a. 
fixup isReceiverResultRegSelf: (fixup isReceiverResultRegSelf and: [optStatus isReceiverResultRegLive])" [fixup isReceiverResultRegSelf: false] ifFalse: [fixup isReceiverResultRegSelf ifTrue: [self putSelfInReceiverResultReg]]]! Item was changed: ----- Method: RegisterAllocatingCogit>>mergeWithFixupIfRequired: (in category 'simulation stack') ----- mergeWithFixupIfRequired: fixup "If this bytecode has a fixup, some kind of merge needs to be done. There are 4 cases: 1) the bytecode has no fixup (fixup isNotAFixup) do nothing 2) the bytecode has a non merge fixup the fixup has needsNonMergeFixup. The code generating non merge fixup (currently only special selector code) is responsible for the merge so no need to do it. We set deadCode to false as the instruction can be reached from jumps. 3) the bytecode has a merge fixup, but execution flow *cannot* fall through to the merge point. the fixup has needsMergeFixup and deadCode = true. ignores the current simStack as it does not mean anything restores the simStack to the state the jumps to the merge point expects it to be. 4) the bytecode has a merge fixup and execution flow *can* fall through to the merge point. the fixup has needsMergeFixup and deadCode = false. Merge the state into the fixup's state via mergeCurrentSimStackWith:forwards:. In addition, if this is a backjump merge point, we patch the fixup to hold the current simStackPtr for later assertions. self printSimStack: fixup mergeSimStack" <var: #fixup type: #'BytecodeFixup *'> + self deny: self duplicateRegisterAssignmentsInTemporaries. + "case 1" fixup notAFixup ifTrue: [^0]. "case 2" fixup isNonMergeFixup ifTrue: [deadCode ifTrue: [self deny: fixup simStackPtr isNil. simStackPtr := fixup simStackPtr. self restoreSimStackAtMergePoint: fixup. deadCode := false] ifFalse: [self flushRegistersOnlyLiveOnFallThrough: fixup]. ^0]. "cases 3 and 4" self assert: fixup isMergeFixup. self traceMerge: fixup. deadCode ifTrue: [simStackPtr := fixup simStackPtr] "case 3" ifFalse: [(fixup isBackwardBranchFixup and: [compilationPass > 1]) ifTrue: [fixup simStackPtr: simStackPtr]. self mergeCurrentSimStackWith: fixup forwards: true]. "case 4" "cases 3 and 4" deadCode := false. fixup isBackwardBranchFixup ifTrue: [self assert: fixup mergeSimStack isNil == (compilationPass = 1). fixup mergeSimStack ifNil: [self setMergeSimStackOf: fixup]]. fixup targetInstruction: self Label. self assert: simStackPtr = fixup simStackPtr. self cCode: '' inSmalltalk: [self assert: fixup simStackPtr = (self debugStackPointerFor: bytecodePC)]. self restoreSimStackAtMergePoint: fixup. ^0! Item was changed: ----- Method: RegisterAllocatingCogit>>moveVolatileSimStackEntriesToRegisters (in category 'bytecode generator support') ----- moveVolatileSimStackEntriesToRegisters "When jumping forward to a merge point the stack must be reconcilable with the state that falls through to the merge point. We cannot easily arrange that later we add code to the branch, e.g. to spill values. Instead, any volatile contents must be moved to registers. [In fact, that's not exactly true, consider these two code sequences: self at: (expr ifTrue: [1] ifFalse: [2]) put: a self at: 1 put: (expr ifTrue: [a] ifFalse: [b]) The first one needs 1 saving to a register to reconcile with 2. The second one has 1 on both paths, but we're not clever enough to spot this case yet.] Volatile contents are anything not spilled to the stack, because as yet we can only merge registers." 
<inline: true> + | allocatedRegs | <var: #desc type: #'SimStackEntry *'> + allocatedRegs := self allocatedRegisters. (simSpillBase max: 0) to: simStackPtr do: [:i| | desc reg | desc := self simStackAt: i. desc spilled ifTrue: [simSpillBase := i] ifFalse: [desc registerOrNone = NoReg ifTrue: + [reg := self allocateRegNotConflictingWith: allocatedRegs. - [reg := self allocateRegNotConflictingWith: 0. reg = NoReg ifTrue: [self halt] "have to spill" + ifFalse: + [desc storeToReg: reg. + allocatedRegs := allocatedRegs bitOr: (self registerMaskFor: reg)]]]]. + self deny: self duplicateRegisterAssignmentsInTemporaries! - ifFalse: [desc storeToReg: reg]]]]! Item was changed: ----- Method: RegisterAllocatingCogit>>resetSimStack: (in category 'bytecode generator support') ----- resetSimStack: startPC <inline: true> simSpillBase := methodOrBlockNumTemps. simStackPtr := methodOrBlockNumTemps - 1. + optStatus isReceiverResultRegLive: false. - self voidReceiverResultRegContainsSelf. self flushLiveRegistersForSend. self cCode: '' inSmalltalk: [0 to: methodOrBlockNumTemps - 1 do: [:i| (self simStackAt: i) bcptr: startPC]]! Item was added: + ----- Method: SistaCogitClone class>>methodZoneClass (in category 'accessing class hierarchy') ----- + methodZoneClass + ^SistaMethodZone! Item was added: + ----- Method: SistaCogitClone>>defaultCogCodeSize (in category 'accessing') ----- + defaultCogCodeSize + "Return the default number of bytes to allocate for native code at startup. + The actual value can be set via vmParameterAt: and/or a preference in the ini file." + <api> + ^2 * backEnd getDefaultCogCodeSize! Item was changed: ----- Method: SistaCogitClone>>disassembleMethod:on: (in category 'disassembly') ----- disassembleMethod: surrogateOrAddress on: aStream <doNotGenerate> | cogMethod | cogMethod := super disassembleMethod: surrogateOrAddress on: aStream. (cogMethod cmType = CMMethod and: [cogMethod counters ~= 0]) ifTrue: [aStream nextPutAll: 'counters:'; cr. + 0 to: (objectRepresentation numCountersFor: cogMethod counters) - 1 do: - numCounters := objectRepresentation numCountersFor: counters. - 0 to: numCounters - 1 do: [:i| | addr | addr := i * CounterBytes + counters. addr printOn: aStream base: 16. aStream nextPut: $:; space. (objectMemory long32At: addr) printOn: aStream base: 16. aStream cr]. aStream flush]! Item was added: + ----- Method: SistaCogitClone>>genAtPutInlinePrimitive: (in category 'inline primitive generators') ----- + genAtPutInlinePrimitive: prim + "Unary inline primitives." + "SistaV1: 248 11111000 iiiiiiii mjjjjjjj Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1 means inlined primitive, no hard return after execution. + See EncoderForSistaV1's class comment and StackInterpreter>>#trinaryInlinePrimitive:" + | ra1 ra2 rr adjust needsStoreCheck | + "The store check requires rr to be ReceiverResultReg" + needsStoreCheck := (objectRepresentation isUnannotatableConstant: self ssTop) not. + self + allocateRegForStackTopThreeEntriesInto: [:rTop :rNext :rThird | ra2 := rTop. ra1 := rNext. rr := rThird ] + thirdIsReceiver: (prim = 0 and: [ needsStoreCheck ]). + self assert: (rr ~= ra1 and: [rr ~= ra2 and: [ra1 ~= ra2]]). + self ssTop popToReg: ra2. + self ssPop: 1. + self ssTop popToReg: ra1. + self ssPop: 1. + self ssTop popToReg: rr. + self ssPop: 1. + objectRepresentation genConvertSmallIntegerToIntegerInReg: ra1. + "Now: ra is the variable object, rr is long, TempReg holds the value to store." + self flag: #TODO. 
"This is not really working as the immutability and store check needs to be present. " + prim caseOf: { + "0 - 1 pointerAt:put: and byteAt:Put:" + [0] -> [ adjust := (objectMemory baseHeaderSize >> objectMemory shiftForWord) - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative" + adjust ~= 0 ifTrue: [ self AddCq: adjust R: ra1. ]. + self MoveR: ra2 Xwr: ra1 R: rr. + "I added needsStoreCheck so if you initialize an array with a Smi such as 0 or a boolean you don't need the store check" + needsStoreCheck ifTrue: + [ self assert: needsFrame. + objectRepresentation genStoreCheckReceiverReg: rr valueReg: ra2 scratchReg: TempReg inFrame: true] ]. + [1] -> [ objectRepresentation genConvertSmallIntegerToIntegerInReg: ra2. + adjust := objectMemory baseHeaderSize - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative" + self AddCq: adjust R: ra1. + self MoveR: ra2 Xbr: ra1 R: rr. + objectRepresentation genConvertIntegerToSmallIntegerInReg: ra2. ]. + } + otherwise: [^EncounteredUnknownBytecode]. + self ssPushRegister: ra2. + ^0! Item was changed: ----- Method: SistaCogitClone>>genBinaryInlineComparison:opFalse:destReg: (in category 'inline primitive generators') ----- genBinaryInlineComparison: opTrue opFalse: opFalse destReg: destReg "Inlined comparison. opTrue = jump for true and opFalse = jump for false" <var: #branchDescriptor type: #'BytecodeDescriptor *'> | nextPC branchDescriptor targetBytecodePC postBranchPC | self extractMaybeBranchDescriptorInto: [ :descr :next :postBranch :target | branchDescriptor := descr. nextPC := next. postBranchPC := postBranch. targetBytecodePC := target ]. + - (branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifTrue: "This is the path where the inlined comparison is followed immediately by a branch" [ (self fixupAt: nextPC) notAFixup ifTrue: "The next instruction is dead. we can skip it." [deadCode := true. self ensureFixupAt: targetBytecodePC. self ensureFixupAt: postBranchPC ] ifFalse: [self ssPushConstant: objectMemory trueObject]. "dummy value" self genConditionalBranch: (branchDescriptor isBranchTrue ifTrue: [opTrue] ifFalse: [opFalse]) + operand: (self ensureNonMergeFixupAt: targetBytecodePC) asUnsignedInteger. + "We can only elide the jump if the pc after nextPC is the same as postBranchPC. + Branch following means it may not be." + self nextDescriptorExtensionsAndNextPCInto: + [:iguana1 :iguana2 :iguana3 :followingPC| nextPC := followingPC]. + (deadCode and: [nextPC = postBranchPC]) ifFalse: + [ self Jump: (self ensureNonMergeFixupAt: postBranchPC) ] ] - operand: (self ensureNonMergeFixupAt: targetBytecodePC) asUnsignedInteger. - deadCode ifFalse: [ self Jump: (self ensureNonMergeFixupAt: postBranchPC) ] ] ifFalse: "This is the path where the inlined comparison is *not* followed immediately by a branch" [| condJump jump | condJump := self genConditionalBranch: opTrue operand: 0. self genMoveFalseR: destReg. jump := self Jump: 0. condJump jmpTarget: (self genMoveTrueR: destReg). jump jmpTarget: self Label]. ^ 0! Item was changed: ----- Method: SistaCogitClone>>genBinaryVarOpVarInlinePrimitive: (in category 'inline primitive generators') ----- genBinaryVarOpVarInlinePrimitive: prim "Var op var version of binary inline primitives." "SistaV1: 248 11111000 iiiiiiii mjjjjjjj Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1 means inlined primitive, no hard return after execution. 
See EncoderForSistaV1's class comment and StackInterpreter>>#binaryInlinePrimitive:" <option: #SistaVM> | ra rr adjust | self allocateRegForStackTopTwoEntriesInto: [:rTop :rNext | ra := rTop. rr := rNext ]. self ssTop popToReg: ra. self ssPop: 1. self ssTop popToReg: rr. self ssPop: 1. prim caseOf: { "0 through 6, +, -, *, /, //, \\, quo:, SmallInteger op SmallInteger => SmallInteger, no overflow" [0] -> [objectRepresentation genRemoveSmallIntegerTagsInScratchReg: ra. self AddR: ra R: rr]. [1] -> [self SubR: ra R: rr. objectRepresentation genAddSmallIntegerTagsTo: rr]. [2] -> [self genShiftAwaySmallIntegerTagsInScratchReg: rr. self genRemoveSmallIntegerTagsInScratchReg: ra. self MulR: ra R: rr. self genSetSmallIntegerTagsIn: rr]. + "[4] -> []. + [5] -> []. + [6] -> []." "2016 through 2020, bitAnd:, bitOr:, bitXor, bitShiftLeft:, bitShiftRight:, SmallInteger op SmallInteger => SmallInteger, no overflow" [16] -> [ self AndR: ra R: rr ]. [17] -> [ self OrR: ra R: rr ]. [18] -> [objectRepresentation genRemoveSmallIntegerTagsInScratchReg: ra. self XorR: ra R: rr. ]. [19] -> [ objectRepresentation genConvertSmallIntegerToIntegerInReg: ra. objectRepresentation genRemoveSmallIntegerTagsInScratchReg: rr. self LogicalShiftLeftR: ra R: rr. objectRepresentation genAddSmallIntegerTagsTo: rr]. [20] -> [objectRepresentation genConvertSmallIntegerToIntegerInReg: ra. self ArithmeticShiftRightR: ra R: rr. objectRepresentation genClearAndSetSmallIntegerTagsIn: rr.]. "2032 through 2037, >, <, >=, <=. =, ~=, SmallInteger op SmallInteger => Boolean (flags?? then in jump bytecodes if ssTop is a flags value, just generate the instruction!!!!)" "CmpCqR is SubRCq so everything is reversed." [32] -> [ self CmpR: ra R: rr. self genBinaryInlineComparison: JumpGreater opFalse: JumpLessOrEqual destReg: rr ]. [33] -> [ self CmpR: ra R: rr. self genBinaryInlineComparison: JumpLess opFalse: JumpGreaterOrEqual destReg: rr ]. [34] -> [ self CmpR: ra R: rr. self genBinaryInlineComparison: JumpGreaterOrEqual opFalse: JumpLess destReg: rr ]. [35] -> [ self CmpR: ra R: rr. self genBinaryInlineComparison: JumpLessOrEqual opFalse: JumpGreater destReg: rr ]. [36] -> [ self CmpR: ra R: rr. self genBinaryInlineComparison: JumpZero opFalse: JumpNonZero destReg: rr ]. [37] -> [ self CmpR: ra R: rr. self genBinaryInlineComparison: JumpNonZero opFalse: JumpZero destReg: rr ]. "2064 through 2068, Pointer Object>>at:, Byte Object>>at:, Short16 Word Object>>at: LongWord32 Object>>at: Quad64Word Object>>at:. obj op 0-rel SmallInteger => oop" [64] -> [objectRepresentation genConvertSmallIntegerToIntegerInReg: ra. adjust := (objectMemory baseHeaderSize >> objectMemory shiftForWord) - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative" adjust ~= 0 ifTrue: [ self AddCq: adjust R: ra. ]. self MoveXwr: ra R: rr R: rr ]. [65] -> [objectRepresentation genConvertSmallIntegerToIntegerInReg: ra. adjust := objectMemory baseHeaderSize - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative" self AddCq: adjust R: ra. self MoveXbr: ra R: rr R: rr. objectRepresentation genConvertIntegerToSmallIntegerInReg: rr] } otherwise: [^EncounteredUnknownBytecode]. self ssPushRegister: rr. ^0! Item was added: + ----- Method: SistaCogitClone>>genByteEqualsInlinePrimitive: (in category 'inline primitive generators') ----- + genByteEqualsInlinePrimitive: prim + + "3021 Byte Object >> equals:length: + The receiver and the arguments are both byte objects and have both the same size (length in bytes). 
+ The length argument is a smallinteger. + Answers true if all fields are equal, false if not. + Comparison is bulked to word comparison." + + "Overview: + 1. The primitive is called like that: [byteObj1 equals: byteObj2 length: length]. + In the worst case we use 5 registers including TempReg + and we produce a loop bulk comparing words. + 2. The common case is a comparison against a cst: [byteString = 'foo']. + which produces in Scorch [byteString equals: 'foo' length: 3]. + We try to generate fast code for this case with 3 heuristics: + - specific fast code if len is a constant + - unroll the loop if len < 2 * wordSize + - compile-time reads if str1 or str2 is a constant and loop is unrolled. + We use 3 registers including TempReg in the common case. + We could use 1 less reg if the loop is unrolled, the instr is followed by a branch + AND one operand is a constant, but this is complicated enough. + 3. We ignore the case where all operands are constants + (We assume Scorch simplifies it, it works but it is not optimised)" + + | str1Reg str2Reg lenReg extraReg jmp jmp2 needjmpZeroSize needLoop unroll jmpZeroSize instr lenCst mask | + <var: #jmp type: #'AbstractInstruction *'> + <var: #instr type: #'AbstractInstruction *'> + <var: #jmp2 type: #'AbstractInstruction *'> + <var: #jmpZeroSize type: #'AbstractInstruction *'> + + "--- quick path for empty string---" + "This path does not allocate registers and right shift on negative int later in the code. + Normally this is resolved by Scorch but we keep it for correctness and consistency" + self ssTop type = SSConstant ifTrue: + [ lenCst := objectMemory integerValueOf: self ssTop constant. + lenCst = 0 ifTrue: [ self ssPop: 3. self ssPushConstant: objectMemory trueObject. ^ 0 ] ]. + + "--- Allocating & loading registers --- " + needLoop := (self ssTop type = SSConstant and: [ lenCst <= (objectMemory wordSize * 2) ]) not. + unroll := needLoop not and: [lenCst > objectMemory wordSize ]. + needLoop + ifTrue: + [ str1Reg := self allocateRegForStackEntryAt: 1 notConflictingWith: self emptyRegisterMask. + str2Reg := self allocateRegForStackEntryAt: 2 notConflictingWith: (self registerMaskFor: str1Reg). + lenReg := self allocateRegForStackEntryAt: 0 notConflictingWith: (self registerMaskFor:str1Reg and: str2Reg). + (self ssValue: 1) popToReg: str1Reg. + (self ssValue: 2) popToReg: str2Reg. + extraReg := self allocateRegNotConflictingWith: (self registerMaskFor: str1Reg and: str2Reg and: lenReg)] + ifFalse: + [ mask := self emptyRegisterMask. + (self ssValue: 1) type = SSConstant ifFalse: + [ str1Reg := self allocateRegForStackEntryAt: 1 notConflictingWith: mask. + (self ssValue: 1) popToReg: str1Reg. + mask := mask bitOr: (self registerMaskFor: str1Reg) ]. + (self ssValue: 2) type = SSConstant ifFalse: + [ str2Reg := self allocateRegForStackEntryAt: 2 notConflictingWith: mask. + (self ssValue: 2) popToReg: str2Reg. + mask := mask bitOr: (self registerMaskFor: str2Reg) ]. + extraReg := self allocateRegNotConflictingWith: mask]. + + "--- Loading LenReg (or statically resolving it) --- " + "LenReg is loaded with (lenInBytes + objectMemory baseHeaderSize - 1 >> shiftForWord) + LenReg is the index for the last word to compare with MoveXwr:r:R:. + The loop iterates from LenReg to first word of ByteObj" + self ssTop type = SSConstant + ifTrue: "common case, str = 'foo'. We can precompute lenReg." + [ lenCst := lenCst + objectMemory baseHeaderSize - 1 >> objectMemory shiftForWord. + needLoop ifTrue: [self MoveCq: lenCst R: lenReg ]. 
+ needjmpZeroSize := false] + ifFalse: "uncommon case, str = str2. lenReg in word computed at runtime." + [ self ssTop popToReg: lenReg. + objectRepresentation genConvertSmallIntegerToIntegerInReg: lenReg. + self CmpCq: 0 R: lenReg. + jmpZeroSize := self JumpZero: 0. + needjmpZeroSize := true. + self AddCq: objectMemory baseHeaderSize - 1 R: lenReg. + self ArithmeticShiftRightCq: objectMemory shiftForWord R: lenReg ]. + + "--- Comparing the strings --- " + "LenReg has the index of the last word to read (unless no loop). + We decrement it to adjust -1 (0 in 64 bits) while comparing" + needLoop + ifTrue: + [instr := self MoveXwr: lenReg R: str1Reg R: extraReg. + self MoveXwr: lenReg R: str2Reg R: TempReg. + self CmpR: extraReg R: TempReg. + jmp := self JumpNonZero: 0. "then string are not equal (jmp target)" + self AddCq: -1 R: lenReg. + self CmpCq: (objectMemory baseHeaderSize >> objectMemory shiftForWord) - 1 R: lenReg. "first word of ByteObj, stop looping." + self JumpNonZero: instr] + ifFalse: "Common case, only 1 or 2 word to check: no lenReg allocation, cst micro optimisations" + [self genByteEqualsInlinePrimitiveCmp: str1Reg with: str2Reg scratch1: extraReg scratch2: TempReg field: 0. + jmp := self JumpNonZero: 0. "then string are not equal (jmp target)" + unroll ifTrue: "unrolling more than twice generate more instructions than the loop so we don't do it" + [self genByteEqualsInlinePrimitiveCmp: str1Reg with: str2Reg scratch1: extraReg scratch2: TempReg field: 1. + jmp2 := self JumpNonZero: 0. "then string are not equal (jmp target)"]]. + needjmpZeroSize ifTrue: [ jmpZeroSize jmpTarget: self Label ]. + "fall through, strings are equal" + + "--- Pushing the result or pipelining a branch --- " + self ssPop: 3. + self genByteEqualsInlinePrimitiveResult: jmp returnReg: extraReg. + unroll ifTrue: [jmp2 jmpTarget: jmp getJmpTarget]. + ^0! Item was added: + ----- Method: SistaCogitClone>>genByteEqualsInlinePrimitiveCmp:with:scratch1:scratch2:field: (in category 'inline primitive generators') ----- + genByteEqualsInlinePrimitiveCmp: str1Reg with: str2Reg scratch1: scratch1Reg scratch2: scratch2Reg field: index + | shift | + <inline: true> + shift := objectMemory baseHeaderSize + (index * objectMemory wordSize). + (self ssValue: 1) type = SSConstant + ifTrue: [self MoveCq: (objectMemory fetchPointer: index ofObject: (self ssValue: 1) constant) R: scratch1Reg] + ifFalse: [self MoveMw: shift r: str1Reg R: scratch1Reg]. + (self ssValue: 2) type = SSConstant + ifTrue: [self MoveCq: (objectMemory fetchPointer: index ofObject: (self ssValue: 2) constant) R: scratch2Reg] + ifFalse: [self MoveMw: shift r: str2Reg R: scratch2Reg]. + self CmpR: scratch1Reg R: scratch2Reg.! Item was added: + ----- Method: SistaCogitClone>>genByteEqualsInlinePrimitiveResult:returnReg: (in category 'inline primitive generators') ----- + genByteEqualsInlinePrimitiveResult: jmp returnReg: reg + "Byte equal is falling through if the result is true, or jumping using jmp if the result is false. + The method is required to set the jump target of jmp. + We look ahead for a branch and pipeline the jumps if possible.. + ReturnReg is used only if not followed immediately by a branch." + | branchDescriptor nextPC postBranchPC targetBytecodePC localJump canElide | + <var: #localJump type: #'AbstractInstruction *'> + <var: #branchDescriptor type: #'BytecodeDescriptor *'> + self extractMaybeBranchDescriptorInto: [ :descr :next :postBranch :target | + branchDescriptor := descr. nextPC := next. postBranchPC := postBranch. 
targetBytecodePC := target ]. + + "Case 1 - not followed by a branch" + (branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) + ifFalse: + [self genMoveTrueR: reg. + localJump := self Jump: 0. + jmp jmpTarget: (self genMoveFalseR: reg). + localJump jmpTarget: self Label. + self ssPushRegister: reg. + ^ 0]. + + "Case 2 - followed by a branch" + (self fixupAt: nextPC) notAFixup + ifTrue: "The next instruction is dead. we can skip it." + [deadCode := true. + self ensureFixupAt: targetBytecodePC. + self ensureFixupAt: postBranchPC ] + ifFalse: + [self ssPushConstant: objectMemory trueObject]. "dummy value" + "We can only elide the jump if the pc after nextPC is the same as postBranchPC. + Branch following means it may not be." + self nextDescriptorExtensionsAndNextPCInto: + [:iguana1 :iguana2 :iguana3 :followingPC| nextPC := followingPC]. + canElide := deadCode and: [nextPC = postBranchPC]. + branchDescriptor isBranchTrue + ifTrue: + [ self Jump: (self ensureNonMergeFixupAt: targetBytecodePC). + canElide + ifFalse: [ jmp jmpTarget: (self ensureNonMergeFixupAt: postBranchPC) ] + ifTrue: [ jmp jmpTarget: self Label ] ] + ifFalse: [ canElide ifFalse: [ self Jump: (self ensureNonMergeFixupAt: postBranchPC). + jmp jmpTarget: (self ensureNonMergeFixupAt: targetBytecodePC) ] ]. + ^0! Item was changed: ----- Method: SistaCogitClone>>genCallPrimitiveBytecode (in category 'bytecode generators') ----- genCallPrimitiveBytecode "SistaV1: 248 11111000 iiiiiiii mjjjjjjj Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1 means inlined primitive, no hard return after execution. See EncoderForSistaV1's class comment and StackInterpreter>>#inlinePrimitiveBytecode:" + | prim primSet | - | prim | byte2 < 128 ifTrue: [^bytecodePC = initialPC ifTrue: [0] ifFalse: [EncounteredUnknownBytecode]]. prim := byte2 - 128 << 8 + byte1. + primSet := prim >> 13 bitAnd: 3. + prim := prim bitAnd: 8191. + LowcodeVM + ifTrue: + [ + primSet = 1 ifTrue: [ + prim < 1000 ifTrue: + [^self genLowcodeNullaryInlinePrimitive: prim]. + prim < 2000 ifTrue: + [^self genLowcodeUnaryInlinePrimitive: prim - 1000]. + + prim < 3000 ifTrue: + [^ self genLowcodeBinaryInlinePrimitive: prim - 2000]. + + prim < 4000 ifTrue: + [^self genLowcodeTrinaryInlinePrimitive: prim - 3000]. + ] + ]. + + self assert: primSet = 0. + prim < 1000 ifTrue: [^self genNullaryInlinePrimitive: prim]. prim < 2000 ifTrue: [^self genUnaryInlinePrimitive: prim - 1000]. prim < 3000 ifTrue: [self ssTop type = SSConstant ifTrue: [^self genBinaryVarOpConstInlinePrimitive: prim - 2000]. (self ssValue: 1) type = SSConstant ifTrue: [^self genBinaryConstOpVarInlinePrimitive: prim - 2000]. ^self genBinaryVarOpVarInlinePrimitive: prim - 2000]. prim < 4000 ifTrue: [^self genTrinaryInlinePrimitive: prim - 3000]. + + prim < 5000 ifTrue: + [^self genQuaternaryInlinePrimitive: prim - 4000]. + + prim < 6000 ifTrue: + [^self genQuinaryInlinePrimitive: prim - 5000]. + - ^EncounteredUnknownBytecode! Item was removed: - ----- Method: SistaCogitClone>>genExtJumpIfNotInstanceOfBehaviorsBytecode (in category 'bytecode generators') ----- - genExtJumpIfNotInstanceOfBehaviorsBytecode - "SistaV1: * 254 11111110 kkkkkkkk jjjjjjjj branch If Not Instance Of Behavior/Array Of Behavior kkkkkkkk (+ Extend A * 256, where Extend A >= 0) distance jjjjjjjj (+ Extend B * 256, where Extend B >= 0)" - | reg literal distance targetFixUp inverse | - - "We lose the information of in which register is stack top - when jitting the branch target so we need to flush everything. 
- We could use a fixed register here...." - reg := self allocateRegForStackEntryAt: 0. - self ssTop popToReg: reg. - self ssFlushTo: simStackPtr. "flushed but the value is still in reg" - self ssPop: 1. - - literal := self getLiteral: (extA * 256 + byte1). - (inverse := extB < 0) ifTrue: - [extB := extB + 128]. - distance := extB * 256 + byte2. - extA := extB := numExtB := 0. - - targetFixUp := self cCoerceSimple: (self ensureFixupAt: bytecodePC + 3 + distance) to: #'AbstractInstruction *'. - inverse - ifFalse: - [(objectMemory isArrayNonImm: literal) - ifTrue: [objectRepresentation branchIf: reg notInstanceOfBehaviors: literal target: targetFixUp] - ifFalse: [objectRepresentation branchIf: reg notInstanceOfBehavior: literal target: targetFixUp] ] - ifTrue: - [(objectMemory isArrayNonImm: literal) - ifTrue: [objectRepresentation branchIf: reg instanceOfBehaviors: literal target: targetFixUp] - ifFalse: [objectRepresentation branchIf: reg instanceOfBehavior: literal target: targetFixUp]]. - - ^0! Item was added: + ----- Method: SistaCogitClone>>genQuaternaryInlinePrimitive: (in category 'inline primitive generators') ----- + genQuaternaryInlinePrimitive: prim + "Quaternary inline primitives." + "SistaV1: 248 11111000 iiiiiiii mjjjjjjj Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1 means inlined primitive, no hard return after execution. + See EncoderForSistaV1's class comment and StackInterpreter>>#quaternaryInlinePrimitive:" + | needStoreCheck sourceReg stopReg objReg adjust jmp cmp isStartCst isStopCst startCst stopCst iteratorReg | + <var: #jmp type: #'AbstractInstruction *'> + <var: #cmp type: #'AbstractInstruction *'> + prim = 0 ifFalse: [^EncounteredUnknownBytecode]. + + "4000 Pointer Object>> fillFrom:to:with: The receiver is a Pointer object. the middle two arguments are smallintegers. Last argument is any object. Fills the object in between the two indexes with last argument. Receiver is guaranteed to be mutable. The pointer accesses are raw (no inst var check). If ExtB is set to 1, no store check is present. Else a single store check is done for the bulk operation. Answers the receiver." + needStoreCheck := self sistaNeedsStoreCheck. + extB := numExtB := 0. + + "Allocate reg for src, objToStore, iterator and stop." + sourceReg := needStoreCheck + ifTrue: [ self ssAllocateRequiredReg: ReceiverResultReg. + self voidReceiverResultRegContainsSelf. + ReceiverResultReg ] + ifFalse: [ self allocateRegForStackEntryAt: 3 notConflictingWith: self emptyRegisterMask ]. + (self ssValue: 3) popToReg: sourceReg. + objReg := self allocateRegForStackEntryAt: 0 notConflictingWith: (self registerMaskFor: sourceReg). + self ssTop popToReg: objReg. + + "Set up iterator to first index to write and stop to last index to write" + adjust := (objectMemory baseHeaderSize >> objectMemory shiftForWord) - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative" + isStartCst := (self ssValue: 2) type = SSConstant. + isStopCst := (self ssValue: 1) type = SSConstant. + isStartCst ifTrue: [startCst := adjust + (objectMemory integerValueOf: (self ssValue: 2) constant)]. + isStopCst ifTrue: [stopCst := adjust + (objectMemory integerValueOf: (self ssValue: 1) constant)]. 
+ + (isStartCst + and: [isStopCst + and: [stopCst - startCst < 7 ]]) "The other path generates at least 7 instructions" + ifTrue: ["unroll" + startCst + to: stopCst + do: [ :i | self MoveMw: i r: sourceReg R: objReg ] ] + ifFalse: ["loop" + stopReg := self allocateRegNotConflictingWith: (self registerMaskFor: sourceReg and: objReg). + iteratorReg := self allocateRegNotConflictingWith: (self registerMaskFor: sourceReg and: objReg and: stopReg). + isStartCst + ifTrue: [ self MoveCq: startCst R: iteratorReg ] + ifFalse: [ (self ssValue: 2) popToReg: iteratorReg. + adjust ~= 0 ifTrue: [ self AddCq: adjust R: iteratorReg ] ]. + isStopCst + ifTrue: [ self MoveCq: stopCst R: stopReg ] + ifFalse: [ (self ssValue: 1) popToReg: stopReg. + adjust ~= 0 ifTrue: [ self AddCq: adjust R: stopReg ] ]. + cmp := self CmpR: stopReg R: iteratorReg. + jmp := self JumpAbove: 0. + self MoveR: objReg Xwr: iteratorReg R: sourceReg. + self AddCq: 1 R: iteratorReg. + self Jump: cmp. + jmp jmpTarget: self Label]. + + needStoreCheck ifTrue: [objectRepresentation genStoreCheckReceiverReg: sourceReg valueReg: objReg scratchReg: TempReg inFrame: true]. + + self ssPop: 4. + self ssPushRegister: sourceReg. + ^0! Item was added: + ----- Method: SistaCogitClone>>genQuinaryInlinePrimitive: (in category 'inline primitive generators') ----- + genQuinaryInlinePrimitive: prim + "SistaV1: 248 11111000 iiiiiiii mjjjjjjj Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1 means inlined primitive, no hard return after execution. + See EncoderForSistaV1's class comment and StackInterpreter>>#quaternaryInlinePrimitive:" + ^EncounteredUnknownBytecode! Item was changed: ----- Method: SistaCogitClone>>genTrinaryInlinePrimitive: (in category 'inline primitive generators') ----- genTrinaryInlinePrimitive: prim + "trinary inline primitives." - "Unary inline primitives." "SistaV1: 248 11111000 iiiiiiii mjjjjjjj Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1 means inlined primitive, no hard return after execution. See EncoderForSistaV1's class comment and StackInterpreter>>#trinaryInlinePrimitive:" + + prim < 10 ifTrue: [^ self genAtPutInlinePrimitive: prim]. + prim = 21 ifTrue: [^ self genByteEqualsInlinePrimitive: prim]. + ^ EncounteredUnknownBytecode! - | ra1 ra2 rr adjust needsStoreCheck | - "The store check requires rr to be ReceiverResultReg" - needsStoreCheck := (objectRepresentation isUnannotatableConstant: self ssTop) not. - self - allocateRegForStackTopThreeEntriesInto: [:rTop :rNext :rThird | ra2 := rTop. ra1 := rNext. rr := rThird ] - thirdIsReceiver: (prim = 0 and: [ needsStoreCheck ]). - self assert: (rr ~= ra1 and: [rr ~= ra2 and: [ra1 ~= ra2]]). - self ssTop popToReg: ra2. - self ssPop: 1. - self ssTop popToReg: ra1. - self ssPop: 1. - self ssTop popToReg: rr. - self ssPop: 1. - objectRepresentation genConvertSmallIntegerToIntegerInReg: ra1. - "Now: ra is the variable object, rr is long, TempReg holds the value to store." - self flag: #TODO. "This is not really working as the immutability and store check needs to be present. " - prim caseOf: { - "0 - 1 pointerAt:put: and byteAt:Put:" - [0] -> [ adjust := (objectMemory baseHeaderSize >> objectMemory shiftForWord) - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative" - adjust ~= 0 ifTrue: [ self AddCq: adjust R: ra1. ]. - self MoveR: ra2 Xwr: ra1 R: rr. - "I added needsStoreCheck so if you initialize an array with a Smi such as 0 or a boolean you don't need the store check" - needsStoreCheck ifTrue: - [ self assert: needsFrame. 
- objectRepresentation genStoreCheckReceiverReg: rr valueReg: ra2 scratchReg: TempReg inFrame: true] ]. - [1] -> [ objectRepresentation genConvertSmallIntegerToIntegerInReg: ra2. - adjust := objectMemory baseHeaderSize - 1. "shift by baseHeaderSize and then move from 1 relative to zero relative" - self AddCq: adjust R: ra1. - self MoveR: ra2 Xbr: ra1 R: rr. - objectRepresentation genConvertIntegerToSmallIntegerInReg: ra2. ] - } - otherwise: [^EncounteredUnknownBytecode]. - self ssPushRegister: ra2. - ^0! Item was changed: ----- Method: SistaCogitClone>>genUnaryInlinePrimitive: (in category 'inline primitive generators') ----- genUnaryInlinePrimitive: prim "Unary inline primitives." "SistaV1: 248 11111000 iiiiiiii mjjjjjjj Call Primitive #iiiiiiii + (jjjjjjj * 256) m=1 means inlined primitive, no hard return after execution. See EncoderForSistaV1's class comment and StackInterpreter>>#unaryInlinePrimitive:" | rcvrReg resultReg | rcvrReg := self allocateRegForStackEntryAt: 0. resultReg := self allocateRegNotConflictingWith: (self registerMaskFor: rcvrReg). prim caseOf: { "00 unchecked class" [1] -> "01 unchecked pointer numSlots" [self ssTop popToReg: rcvrReg. self ssPop: 1. objectRepresentation genGetNumSlotsOf: rcvrReg into: resultReg; genConvertIntegerToSmallIntegerInReg: resultReg]. "02 unchecked pointer basicSize" [3] -> "03 unchecked byte numBytes" [self ssTop popToReg: rcvrReg. self ssPop: 1. objectRepresentation genGetNumBytesOf: rcvrReg into: resultReg; genConvertIntegerToSmallIntegerInReg: resultReg]. "04 unchecked short16Type format numShorts" "05 unchecked word32Type format numWords" "06 unchecked doubleWord64Type format numDoubleWords" [11] -> "11 unchecked fixed pointer basicNew" [self ssTop type ~= SSConstant ifTrue: [^EncounteredUnknownBytecode]. (objectRepresentation genGetInstanceOf: self ssTop constant into: resultReg initializingIf: self extBSpecifiesInitializeInstance) ~= 0 ifTrue: [^ShouldNotJIT]. "e.g. bad class" self ssPop: 1] . [20] -> "20 identityHash" + [objectRepresentation genGetIdentityHash: rcvrReg resultReg: resultReg. - [self ssTop popToReg: rcvrReg. - objectRepresentation genGetHashFieldNonImmOf: rcvrReg asSmallIntegerInto: resultReg. self ssPop: 1] . "21 identityHash (SmallInteger)" "22 identityHash (Character)" "23 identityHash (SmallFloat64)" "24 identityHash (Behavior)" "30 immediateAsInteger (Character) + 31 immediateAsInteger (SmallFloat64) + 35 immediateAsFloat (SmallInteger) " - 31 immediateAsInteger (SmallFloat64)" [30] -> + [self ssTop popToReg: resultReg. + objectRepresentation genConvertCharacterToSmallIntegerInReg: resultReg. + self ssPop: 1]. + [35] -> + [self assert: self processorHasDoublePrecisionFloatingPointSupport. + self MoveR: rcvrReg R: TempReg. + self genConvertSmallIntegerToIntegerInReg: TempReg. + self ConvertR: TempReg Rd: DPFPReg0. + self flag: #TODO. "Should never fail" + self + genAllocFloatValue: DPFPReg0 + into: resultReg + scratchReg: TempReg + scratchReg: NoReg. "scratch2 for V3 only"] - [self ssTop popToReg: rcvrReg. - objectRepresentation genConvertCharacterToSmallIntegerInReg: rcvrReg. - self ssPop: 1] } otherwise: [^EncounteredUnknownBytecode]. extB := 0. numExtB := 0. self ssPushRegister: resultReg. ^0! Item was changed: ----- Method: SistaCogitClone>>genUnconditionalTrapBytecode (in category 'bytecode generators') ----- genUnconditionalTrapBytecode "SistaV1: * 217 Trap" self ssFlushTo: simStackPtr. self CallRT: ceTrapTrampoline. self annotateBytecode: self Label. + deadCode := true. ^0! 
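[Editorial note: a worked example may help with the inline-primitive numbering used by genCallPrimitiveBytecode and the arity-specific generators above. Take inline primitive 3021 (Byte Object >> equals:length:, handled by genByteEqualsInlinePrimitive:); the byte values below are chosen so that they encode that number and are purely illustrative.]

	"Workspace sketch: decoding inline primitive 3021 the way genCallPrimitiveBytecode does."
	| byte1 byte2 prim primSet |
	byte1 := 3021 \\ 256.			"iiiiiiii = 205"
	byte2 := 128 + (3021 // 256).		"m bit set, jjjjjjj = 11, hence 139"
	prim := byte2 - 128 << 8 + byte1.	"3021"
	primSet := prim >> 13 bitAnd: 3.	"0, i.e. the ordinary (non-Lowcode) primitive set"
	prim := prim bitAnd: 8191.		"still 3021"
	prim // 1000				"3: trinary band, so genTrinaryInlinePrimitive: 21, which dispatches to genByteEqualsInlinePrimitive:"
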
Item was added: + ----- Method: SistaCogitClone>>isTrapAt: (in category 'simulation only') ----- + isTrapAt: retpc + "For stack depth checking." + <doNotGenerate> + ^(backEnd isCallPrecedingReturnPC: retpc) + and: [(backEnd callTargetFromReturnAddress: retpc) = ceTrapTrampoline]! Item was added: + ----- Method: SistaCogitClone>>setCogCodeZoneThreshold: (in category 'accessing') ----- + setCogCodeZoneThreshold: threshold + <doNotGenerate> + ^methodZone setCogCodeZoneThreshold: threshold! Item was changed: ----- Method: SistaRegisterAllocatingCogit>>genSpecialSelectorComparison (in category 'bytecode generators') ----- genSpecialSelectorComparison "Override to count inlined branches if followed by a conditional branch. We borrow the following conditional branch's counter and when about to inline the comparison we decrement the counter (without writing it back) and if it trips simply abort the inlining, falling back to the normal send which will then continue to the conditional branch which will trip and enter the abort." | nextPC postBranchPC targetPC primDescriptor branchDescriptor rcvrIsInt rcvrIsConst argIsIntConst argInt jumpNotSmallInts inlineCAB + counterAddress countTripped counterReg index rcvrReg argReg + branchToTarget needMergeToContinue needMergeToTarget | - counterAddress countTripped counterReg index rcvrReg argReg | <var: #countTripped type: #'AbstractInstruction *'> <var: #primDescriptor type: #'BytecodeDescriptor *'> <var: #jumpNotSmallInts type: #'AbstractInstruction *'> <var: #branchDescriptor type: #'BytecodeDescriptor *'> (coInterpreter isOptimizedMethod: methodObj) ifTrue: [^self genSpecialSelectorComparisonWithoutCounters]. primDescriptor := self generatorAt: byte0. argIsIntConst := self ssTop type = SSConstant and: [objectMemory isIntegerObject: (argInt := self ssTop constant)]. rcvrIsInt := ((rcvrIsConst := (self ssValue: 1) type = SSConstant) and: [objectMemory isIntegerObject: (self ssValue: 1) constant]) or: [self mclassIsSmallInteger and: [(self ssValue: 1) isSameEntryAs: (self addressOf: simSelf)]]. "short-cut the jump if operands are SmallInteger constants." (argIsIntConst and: [rcvrIsInt and: [rcvrIsConst]]) ifTrue: [^ self genStaticallyResolvedSpecialSelectorComparison]. self extractMaybeBranchDescriptorInto: [ :descr :next :postBranch :target | branchDescriptor := descr. nextPC := next. postBranchPC := postBranch. targetPC := target ]. + - "Only interested in inlining if followed by a conditional branch." inlineCAB := branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]. "Further, only interested in inlining = and ~= if there's a SmallInteger constant involved. The relational operators successfully statically predict SmallIntegers; the equality operators do not." (inlineCAB and: [primDescriptor opcode = JumpZero or: [primDescriptor opcode = JumpNonZero]]) ifTrue: [inlineCAB := argIsIntConst or: [rcvrIsInt]]. inlineCAB ifFalse: [^self genSpecialSelectorSend]. "In-line the comparison and the jump, but if the types are not SmallInteger then we will need to do a send and fall through to the following conditional branch. Since we're allocating values in registers we would like to keep those registers live on the inlined path and reload registers along the non-inlined send path. The merge logic at the branch destinations handles this." argIsIntConst ifTrue: [rcvrReg := self allocateRegForStackEntryAt: 1. (self ssValue: 1) popToReg: rcvrReg. + counterReg := self allocateRegNotConflictingWith: (self registerMaskFor: rcvrReg). 
+ argReg := NoReg] - counterReg := self allocateRegNotConflictingWith: (self registerMaskFor: rcvrReg)] ifFalse: [self allocateRegForStackTopTwoEntriesInto: [:rTop :rNext| argReg := rTop. rcvrReg := rNext]. - rcvrReg = Arg0Reg ifTrue: - [rcvrReg := argReg. argReg := Arg0Reg]. self ssTop popToReg: argReg. (self ssValue: 1) popToReg: rcvrReg. counterReg := self allocateRegNotConflictingWith: (self registerMaskFor: rcvrReg and: argReg)]. + self ssPop: 2. jumpNotSmallInts := (rcvrIsInt and: [argIsIntConst]) ifFalse: [argIsIntConst + ifTrue: [objectRepresentation genJumpNotSmallInteger: rcvrReg] - ifTrue: [objectRepresentation genJumpNotSmallInteger: ReceiverResultReg] ifFalse: [rcvrIsInt + ifTrue: [objectRepresentation genJumpNotSmallInteger: argReg] + ifFalse: [objectRepresentation genJumpNotSmallIntegersIn: rcvrReg and: argReg scratch: TempReg]]]. - ifTrue: [objectRepresentation genJumpNotSmallInteger: Arg0Reg] - ifFalse: [objectRepresentation genJumpNotSmallIntegersIn: ReceiverResultReg and: Arg0Reg scratch: TempReg]]]. self genExecutionCountLogicInto: [ :cAddress :countTripBranch | counterAddress := cAddress. countTripped := countTripBranch ] counterReg: counterReg. argIsIntConst ifTrue: [self CmpCq: argInt R: rcvrReg] ifFalse: [self CmpR: argReg R: rcvrReg]. + "self printSimStack; printSimStack: (self fixupAt: postBranchPC) mergeSimStack; printSimStack: (self fixupAt: targetPC) mergeSimStack" + "If there are merges to be performed on the forward branches we have to execute + the merge code only along the path requiring that merge, and exactly once." + needMergeToTarget := self mergeRequiredForJumpTo: targetPC. + needMergeToContinue := self mergeRequiredForJumpTo: postBranchPC. + "Cmp is weird/backwards so invert the comparison." + (needMergeToTarget and: [needMergeToContinue]) ifTrue: + [branchToTarget := self genConditionalBranch: (branchDescriptor isBranchTrue + ifTrue: [primDescriptor opcode] + ifFalse: [self inverseBranchFor: primDescriptor opcode]) + operand: 0. + self Jump: (self ensureFixupAt: postBranchPC). + branchToTarget jmpTarget: self Label. + self Jump: (self ensureFixupAt: targetPC)]. + (needMergeToTarget and: [needMergeToContinue not]) ifTrue: + [self genConditionalBranch: (branchDescriptor isBranchFalse + ifTrue: [primDescriptor opcode] + ifFalse: [self inverseBranchFor: primDescriptor opcode]) + operand: (self ensureFixupAt: postBranchPC) asUnsignedInteger. + self Jump: (self ensureFixupAt: targetPC)]. + (needMergeToTarget not and: [needMergeToContinue]) ifTrue: + [self genConditionalBranch: (branchDescriptor isBranchTrue + ifTrue: [primDescriptor opcode] + ifFalse: [self inverseBranchFor: primDescriptor opcode]) + operand: (self ensureFixupAt: targetPC) asUnsignedInteger. + self Jump: (self ensureFixupAt: postBranchPC)]. + (needMergeToTarget or: [needMergeToContinue]) ifFalse: + [self genConditionalBranch: (branchDescriptor isBranchTrue + ifTrue: [primDescriptor opcode] + ifFalse: [self inverseBranchFor: primDescriptor opcode]) + operand: (self ensureFixupAt: targetPC) asUnsignedInteger. + self Jump: (self ensureFixupAt: postBranchPC)]. + - "Cmp is weird/backwards so invert the comparison. Further since there is a following conditional - jump bytecode define non-merge fixups and leave the cond bytecode to set the mergeness." - self genConditionalBranch: (branchDescriptor isBranchTrue - ifTrue: [primDescriptor opcode] - ifFalse: [self inverseBranchFor: primDescriptor opcode]) - operand: (self ensureFixupAt: targetPC) asUnsignedInteger. 
- - self genFallsThroughCountLogicCounterReg: counterReg counterAddress: counterAddress. - - self Jump: (self ensureFixupAt: postBranchPC). countTripped jmpTarget: self Label. jumpNotSmallInts ifNil: [self annotateInstructionForBytecode. deadCode := true. ^0]. jumpNotSmallInts jmpTarget: countTripped getJmpTarget. self ssFlushTo: simStackPtr. + rcvrReg = Arg0Reg + ifTrue: + [argReg = ReceiverResultReg + ifTrue: [self SwapR: Arg0Reg R: Arg0Reg Scratch: TempReg. argReg := Arg0Reg] + ifFalse: [self MoveR: rcvrReg R: ReceiverResultReg]. + rcvrReg := ReceiverResultReg]. - self deny: rcvrReg = Arg0Reg. argIsIntConst ifTrue: [self MoveCq: argInt R: Arg0Reg] ifFalse: [argReg ~= Arg0Reg ifTrue: [self MoveR: argReg R: Arg0Reg]]. rcvrReg ~= ReceiverResultReg ifTrue: [self MoveR: rcvrReg R: ReceiverResultReg]. index := byte0 - self firstSpecialSelectorBytecodeOffset. ^self genMarshalledSend: index negated - 1 numArgs: 1 sendTable: ordinarySendTrampolines! Item was changed: ----- Method: StackToRegisterMappingCogit>>assertCorrectSimStackPtr (in category 'compile abstract instructions') ----- assertCorrectSimStackPtr <inline: true> "generates nothing anyway" self cCode: '' inSmalltalk: [deadCode ifFalse: [self assert: simStackPtr + (needsFrame ifTrue: [0] ifFalse: [1]) + = (self debugStackPointerFor: bytecodePC)]. + self deny: self duplicateRegisterAssignmentsInTemporaries]. - = (self debugStackPointerFor: bytecodePC)]]. ! |
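[Editorial note: the deny: added to assertCorrectSimStackPtr, and earlier to mergeWithFixupIfRequired: and moveVolatileSimStackEntriesToRegisters, follows the standard assert/deny idiom: deny: fails when its argument is true. The sketch below illustrates the kind of condition such a check guards against, assuming from its name that duplicateRegisterAssignmentsInTemporaries answers whether two temporaries share an assigned register; the temp-to-register data is invented for the example.]

	"Workspace sketch: detect two temps sharing an assigned register (invented data)."
	| tempRegs duplicates |
	tempRegs := #(ReceiverResultReg NoReg Arg0Reg Arg0Reg).	"hypothetical register per temp"
	duplicates := false.
	1 to: tempRegs size do: [:i|
		i + 1 to: tempRegs size do: [:j|
			((tempRegs at: i) ~~ #NoReg and: [(tempRegs at: i) == (tempRegs at: j)])
				ifTrue: [duplicates := true]]].
	duplicates	"true here, so a check like 'self deny: duplicates' would signal an assert failure"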