VM Maker: VMMaker.oscog-eem.2558.mcz


Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.2558.mcz

==================== Summary ====================

Name: VMMaker.oscog-eem.2558
Author: eem
Time: 10 September 2019, 11:46:35.172918 am
UUID: ecd4b81e-cfaa-4167-a8ad-c2ebbb4e460b
Ancestors: VMMaker.oscog-nice.2557

Reimplement Spur JIT access to literals and literal variables in #==, #~~ and push/pop/store literal variable.  Instead of having a run-time read barrier in JITted code, add a flag to CogMethod recording whether a method references a movable (becommable) literal, and scan all so-flagged methods post-become to follow becommed literals.

This reduces code size by about 2.2%.  The performance increase is yet to be assessed, but it should be better than 2.2% (less code, and more compact code means more methods fit in the method zone and a more compact working set).  Doing so means we don't have to add all methods to young referrers if OldBecameNew; instead we need only add those with movable literals which, after scanning, end up with a new literal.  This scheme works well for 64 bits, where selectors are not directly referenced; instead the inline cache (because it is only 32 bits) contains the literal index of the selector.
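The scheme above can be sketched in C (the language Slang translates the VM to).  All names and the zone/literal layout here are illustrative stand-ins rather than the VM's actual structures; the point is that a one-bit flag set at JIT time lets the post-become pass skip every method that cannot contain a moved literal.

```c
#include <assert.h>

/* Illustrative stand-in for a Cog method header: one bit records at
   JIT time whether any literal embedded in the code might move. */
typedef struct {
    unsigned cmHasMovableLiteral : 1;
    long     literals[2];   /* stand-in for literals embedded in machine code */
} CogMethodSketch;

/* Pretend oops below 100 were becommed and forwarded 1000 higher. */
static int  isForwarded(long oop)     { return oop < 100; }
static long followForwarded(long oop) { return oop + 1000; }

/* Post-become pass: only flagged methods are scanned, so the common
   case (no movable literals) costs a single bit test per method. */
static void followMovableLiterals(CogMethodSketch *zone, int n) {
    for (int i = 0; i < n; i++) {
        if (!zone[i].cmHasMovableLiteral) continue;
        for (int j = 0; j < 2; j++)
            if (isForwarded(zone[i].literals[j]))
                zone[i].literals[j] = followForwarded(zone[i].literals[j]);
    }
}
```

This trades a small post-become scan (bounded by the number of flagged methods) for the removal of a read barrier from every #==, #~~ and literal-variable access in generated code.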

Slang:
Fix an initialization bug for classes referred to in option: pragmas but not included.  Make sure that all such classes are included in InitializationOptions as false before initializing.

Fix checkGenerateSurrogate:bytesPerWord: when checking for a new method.  The existing code assumed no new methods were being added and so crashed when the cmHasMovableLiteral accessors were generated and checked for.

=============== Diff against VMMaker.oscog-nice.2557 ===============

Item was changed:
  ----- Method: CoInterpreter>>followForwardedFieldsInCurrentMethod (in category 'message sending') -----
  followForwardedFieldsInCurrentMethod
+ | cogMethod wasInYoungReferrers |
- | cogMethod |
  <var: #cogMethod type: #'CogMethod *'>
  <inline: false>
  (self isMachineCodeFrame: framePointer)
  ifTrue:
  [cogMethod := self mframeHomeMethod: framePointer.
  objectMemory
  followForwardedObjectFields: cogMethod methodObject
  toDepth: 0.
+ wasInYoungReferrers := cogMethod cmRefersToYoung.
+ cogit followForwardedLiteralsIn: cogMethod.
+ (wasInYoungReferrers and: [cogMethod cmRefersToYoung not]) ifTrue:
+ [cogit pruneYoungReferrers]]
- cogit followForwardedLiteralsIn: cogMethod]
  ifFalse:
  [objectMemory
  followForwardedObjectFields: method
  toDepth: 0]!

Item was changed:
  ----- Method: CoInterpreter>>postBecomeAction: (in category 'object memory support') -----
  postBecomeAction: theBecomeEffectsFlags
  "Clear the gcMode var and let the Cogit do its post GC checks."
+ self break.
  super postBecomeAction: theBecomeEffectsFlags.
 
+ (objectMemory hasSpurMemoryManagerAPI)
+ ifTrue: [cogit followMovableLiteralsAndUpdateYoungReferrers]
+ ifFalse: [cogit cogitPostGCAction: gcMode].
- (objectMemory hasSpurMemoryManagerAPI
- and: [theBecomeEffectsFlags anyMask: OldBecameNewFlag]) ifTrue:
- [cogit addAllToYoungReferrers].
- cogit cogitPostGCAction: gcMode.
  self nilUncoggableMethods.
+ self assert: cogit kosherYoungReferrers.
  gcMode := 0!

Item was changed:
  ----- Method: CoInterpreterPrimitives>>primitiveObjectAtPut (in category 'object access primitives') -----
  primitiveObjectAtPut
+ "Store a literal into a CompiledMethod at the given index. Defined for CompiledMethods only.
+ We assume that if the user is using this on active code then they will use primitiveVoidVMStateForMethod
+ to discard the machine code as required."
- "Store a literal into a CompiledMethod at the given index. Defined for CompiledMethods only."
  | thisReceiver rawHeader realHeader index newValue |
  newValue := self stackValue: 0.
  index := self stackValue: 1.
  (objectMemory isNonIntegerObject: index) ifTrue:
  [^self primitiveFailFor: PrimErrBadArgument].
  index := objectMemory integerValueOf: index.
  thisReceiver := self stackValue: 2.
  (objectMemory isObjImmutable: thisReceiver) ifTrue:
  [^self primitiveFailFor: PrimErrNoModification].
  rawHeader := self rawHeaderOf: thisReceiver.
  realHeader := (self isCogMethodReference: rawHeader)
  ifTrue: [(self cCoerceSimple: rawHeader to: #'CogMethod *') methodHeader]
  ifFalse: [rawHeader].
  (index > 0
  and: [index <= ((objectMemory literalCountOfMethodHeader: realHeader) + LiteralStart)]) ifFalse:
  [^self primitiveFailFor: PrimErrBadIndex].
  index = 1
  ifTrue:
  [((objectMemory isNonIntegerObject: newValue)
  or: [(objectMemory literalCountOfMethodHeader: newValue) ~= (objectMemory literalCountOfMethodHeader: realHeader)]) ifTrue:
  [^self primitiveFailFor: PrimErrBadArgument].
  (self isCogMethodReference: rawHeader)
  ifTrue: [(self cCoerceSimple: rawHeader to: #'CogMethod *') methodHeader: newValue]
  ifFalse: [objectMemory storePointerUnchecked: 0 ofObject: thisReceiver withValue: newValue]]
  ifFalse:
  [objectMemory storePointer: index - 1 ofObject: thisReceiver withValue: newValue].
  self pop: 3 thenPush: newValue!

Item was changed:
  VMStructType subclass: #CogBlockMethod
+ instanceVariableNames: 'objectHeader homeOffset startpc padToWord cmNumArgs cmType cmRefersToYoung cpicHasMNUCaseOrCMIsFullBlock cmUsageCount cmUsesPenultimateLit cbUsesInstVars cmHasMovableLiteral cmUnusedFlag stackCheckOffset'
- instanceVariableNames: 'objectHeader homeOffset startpc padToWord cmNumArgs cmType cmRefersToYoung cpicHasMNUCaseOrCMIsFullBlock cmUsageCount cmUsesPenultimateLit cbUsesInstVars cmUnusedFlags stackCheckOffset'
  classVariableNames: ''
  poolDictionaries: 'CogMethodConstants VMBasicConstants VMBytecodeConstants'
  category: 'VMMaker-JIT'!
 
+ !CogBlockMethod commentStamp: 'eem 9/9/2019 15:39' prior: 0!
- !CogBlockMethod commentStamp: 'eem 4/14/2016 10:39' prior: 0!
  I am the rump method header for a block method embedded in a full CogMethod.  I am the superclass of CogMethod, which is a Cog method header proper.  Instances of both classes have the same second word.  The homeOffset and startpc fields are overlaid on the objectHeader in a CogMethod.  See Cogit class>>structureOfACogMethod for more information.  In C I look like
 
  typedef struct {
  union {
  struct {
  unsigned short homeOffset;
  unsigned short startpc;
  #if SpurVM
  unsigned int padToWord;
  #endif
  };
  sqInt/sqLong objectHeader;
  };
  unsigned cmNumArgs : 8;
  unsigned cmType : 3;
  unsigned cmRefersToYoung : 1;
  unsigned cpicHasMNUCaseOrCMIsFullBlock : 1;
  unsigned cmUsageCount : 3;
  unsigned cmUsesPenultimateLit : 1;
  unsigned cbUsesInstVars : 1;
+ unsigned cmHasMovableLiteral : 1;
+ unsigned cmUnusedFlag : 1;
- unsigned cmUnusedFlags : 2;
  unsigned stackCheckOffset : 12;
  } CogBlockMethod;
 
  My instances are not actually used.  The methods exist only as input to Slang.  The simulator uses my surrogates (CogBlockMethodSurrogate32 and CogBlockMethodSurrogate64) to reference CogBlockMethod and CogMethod structures in the code zone.  The start of the structure is 32-bits in the V3 memory manager and 64-bits in the Spur memory manager.  In a CMMethod these bits are set to the object header of a marked bits object, allowing code to masquerade as objects when referred to from the first field of a CompiledMethod.  In a CMBlock, they hold the homeOffset and the startpc.
 
  cbUsesInstVars
+ - a flag set to true in blocks that refer to instance variables
- - a flag set to true in blocks that refer to instance variables.
 
+ cmHasMovableLiteral
+ - a flag set to true in methods that refer to movable literals.  This allows Spur to follow literals post become and hence avoid having to follow literals in #==, #~~, and push/pop/storeLiteralVariable.
+
  cmNumArgs
  - the byte containing the block or method arg count
 
  cmRefersToYoung
  - a flag set to true in methods which contain a reference to an object in new space
 
  cmType
  - one of CMFree, CMMethod, CMBlock, CMClosedPIC, CMOpenPIC
 
  cmUnusedFlags
  - as yet unused bits
 
  cmUsageCount
  - a count used to identify older methods in code compaction.  The count decays over time, and compaction frees methods with lower usage counts
 
  cmUsesPenultimateLit
  - a flag that states whether the penultimate literal in the corresponding bytecode method is used.  This in turn is used to check that a become of a method does not alter its bytecode.
 
  cpicHasMNUCaseOrCMIsFullBlock
  - a flag that states whether a CMClosedPIC contains one or more MNU cases which are PIC dispatches used to speed-up MNU processing,
   or states whether a CMMethod is for a full block instead of for a compiled method.
 
  homeOffset
  - the distance a CMBlock header is away from its enclosing CMMethod header
 
  objectHeader
  - an object header used to fool the garbage collector into thinking that a CMMethod is a normal bits object, so that the first field (the header word) of a bytecoded method can refer directly to a CMMethod without special casing the garbage collector's method scanning code more than it already is.
 
  padToWord
  - a pad that may be necessary to make the homeOffset, startpc, padToWord triple as large as a CMMethod's objectHeader field
 
  stackCheckOffset
  - the distance from the header to the stack limit check in a frame building method or block, used to reenter execution in methods or blocks that have checked for events at what is effectively the first bytecode
 
  startpc
  - the bytecode pc of the start of a CMBlock's bytecode in the bytecode method!

Item was changed:
  ----- Method: CogBlockMethod class>>instVarNamesAndTypesForTranslationDo: (in category 'translation') -----
  instVarNamesAndTypesForTranslationDo: aBinaryBlock
  "enumerate aBinaryBlock with the names and C type strings for the
  inst vars to include in a CogMethod or CogBlockMethod struct."
 
  self allInstVarNames do:
  [:ivn|
  "Notionally objectHeader is in a union with homeOffset and startpc but
  we don't have any convenient support for unions.  So hack, hack, hack, hack."
  ((self == CogBlockMethod
  ifTrue: [#('objectHeader')]
  ifFalse: [#('homeOffset' 'startpc' 'padToWord')]) includes: ivn) ifFalse:
  [aBinaryBlock
  value: ivn
  value: (ivn caseOf: {
  ['objectHeader'] -> [self objectMemoryClass baseHeaderSize = 8
  ifTrue: [#sqLong]
  ifFalse: [#sqInt]].
  ['cmNumArgs'] -> [#(unsigned ' : 8')]. "SqueakV3 needs only 5 bits"
  ['cmType'] -> [#(unsigned ' : 3')].
  ['cmRefersToYoung'] -> [#(unsigned #Boolean ' : 1')].
+ ['cmHasMovableLiteral'] -> [#(unsigned #Boolean ' : 1')].
  ['cpicHasMNUCaseOrCMIsFullBlock']
  -> [#(unsigned #Boolean ' : 1')].
  ['cmUsageCount'] -> [#(unsigned ' : 3')]. "See CMMaxUsageCount in initialize"
  ['cmUsesPenultimateLit'] -> [#(unsigned #Boolean ' : 1')].
  ['cbUsesInstVars'] -> [#(unsigned #Boolean ' : 1')].
+ ['cmUnusedFlag'] -> [#(unsigned ' : 1')].
- ['cmUnusedFlags'] -> [#(unsigned ' : 2')].
  ['stackCheckOffset'] -> [#(unsigned ' : 12')]. "See MaxStackCheckOffset in initialize. a.k.a. cPICNumCases"
  ['blockSize'] -> [#'unsigned short']. "See MaxMethodSize in initialize"
  ['blockEntryOffset'] -> [#'unsigned short'].
  ['homeOffset'] -> [#'unsigned short'].
  ['startpc'] -> [#'unsigned short'].
  ['padToWord'] -> [#(#BaseHeaderSize 8 'unsigned int')].
  ['nextMethodOrIRCs'] -> [#usqInt]. "See NewspeakCogMethod"
  ['counters'] -> [#usqInt]} "See SistaCogMethod"
  otherwise:
  [#sqInt])]]!

Item was added:
+ ----- Method: CogBlockMethod>>cmHasMovableLiteral (in category 'accessing') -----
+ cmHasMovableLiteral
+ "Answer the value of cmHasMovableLiteral"
+
+ ^cmHasMovableLiteral!

Item was added:
+ ----- Method: CogBlockMethod>>cmHasMovableLiteral: (in category 'accessing') -----
+ cmHasMovableLiteral: anObject
+ "Set the value of cmHasMovableLiteral"
+
+ ^cmHasMovableLiteral := anObject!

Item was added:
+ ----- Method: CogBlockMethodSurrogate32>>cmHasMovableLiteral (in category 'accessing') -----
+ cmHasMovableLiteral
+ ^(((memory unsignedByteAt: address + 3 + baseHeaderSize) bitShift: -2) bitAnd: 16r1) ~= 0!

Item was added:
+ ----- Method: CogBlockMethodSurrogate32>>cmHasMovableLiteral: (in category 'accessing') -----
+ cmHasMovableLiteral: aValue
+ memory
+ unsignedByteAt: address + baseHeaderSize + 3
+ put: (((memory unsignedByteAt: address + baseHeaderSize + 3) bitAnd: 16rFB) + ((aValue ifTrue: [1] ifFalse: [0]) bitShift: 2)).
+ ^aValue!

Item was added:
+ ----- Method: CogBlockMethodSurrogate64>>cmHasMovableLiteral (in category 'accessing') -----
+ cmHasMovableLiteral
+ ^(((memory unsignedByteAt: address + 3 + baseHeaderSize) bitShift: -2) bitAnd: 16r1) ~= 0!

Item was added:
+ ----- Method: CogBlockMethodSurrogate64>>cmHasMovableLiteral: (in category 'accessing') -----
+ cmHasMovableLiteral: aValue
+ memory
+ unsignedByteAt: address + baseHeaderSize + 3
+ put: (((memory unsignedByteAt: address + baseHeaderSize + 3) bitAnd: 16rFB) + ((aValue ifTrue: [1] ifFalse: [0]) bitShift: 2)).
+ ^aValue!
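The surrogate accessors above manipulate the flag byte directly: cmHasMovableLiteral occupies bit 2 of the byte at offset baseHeaderSize + 3, hence the 16rFB mask and the shift by 2.  A minimal C sketch of the same byte arithmetic, with an illustrative fixed offset in place of the address + baseHeaderSize computation:

```c
#include <assert.h>
#include <stdint.h>

enum { FLAG_BYTE_OFFSET = 3,  /* stand-in for address + baseHeaderSize + 3 */
       FLAG_SHIFT       = 2 };

/* Read bit 2 of the flag byte, as the surrogate getters do. */
static int getHasMovableLiteral(const uint8_t *header) {
    return (header[FLAG_BYTE_OFFSET] >> FLAG_SHIFT) & 1;
}

/* 0xFB (16rFB) clears bit 2 before OR-ing in the new value,
   exactly like the surrogate setters. */
static void setHasMovableLiteral(uint8_t *header, int value) {
    header[FLAG_BYTE_OFFSET] =
        (uint8_t)((header[FLAG_BYTE_OFFSET] & 0xFB)
                  | ((value ? 1 : 0) << FLAG_SHIFT));
}
```

The simulator needs this explicit form because it addresses the code zone as raw bytes; the generated C gets the equivalent access for free from the struct's bit-fields.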

Item was added:
+ ----- Method: CogMethodZone>>firstBogusYoungReferrer (in category 'young referers') -----
+ firstBogusYoungReferrer
+ "Answer the first entry in youngReferrers that violates the invariant that all entries are in use, have the cmRefersToYoung flag set, and occur exactly once; answer nil if there is none.
+ Used to check that the youngReferrers pruning routines work correctly."
+ | pointer cogMethod |
+ <doNotGenerate>
+ (youngReferrers > limitAddress
+ or: [youngReferrers < mzFreeStart]) ifTrue:
+ [^#invalidListPointers].
+ pointer := youngReferrers.
+ [pointer < limitAddress] whileTrue:
+ [cogMethod := coInterpreter cCoerceSimple: (objectMemory longAt: pointer) to: #'CogMethod *'.
+ cogMethod cmType ~= CMFree ifTrue:
+ [cogMethod cmRefersToYoung ifFalse:
+ [^cogMethod].
+ (self occurrencesInYoungReferrers: cogMethod) ~= 1 ifTrue:
+ [^cogMethod]].
+ pointer := pointer + objectMemory wordSize].
+ cogMethod := cogit cCoerceSimple: baseAddress to: #'CogMethod *'.
+ [cogMethod < mzFreeStart] whileTrue:
+ [cogMethod cmType ~= CMFree ifTrue:
+ [(self occurrencesInYoungReferrers: cogMethod) ~= (cogMethod cmRefersToYoung ifTrue: [1] ifFalse: [0]) ifTrue:
+ [^cogMethod]].
+ cogMethod := self methodAfter: cogMethod].
+ ^nil!

Item was changed:
  ----- Method: CogMethodZone>>followForwardedLiteralsInOpenPICList (in category 'jit - api') -----
  followForwardedLiteralsInOpenPICList
  <option: #SpurObjectMemory>
  | openPIC |
  <var: #openPIC type: #'CogMethod *'>
  openPIC := openPICList.
  [openPIC notNil] whileTrue:
  [cogit followForwardedLiteralsIn: openPIC.
+ openPIC := self cCoerceSimple: openPIC nextOpenPIC to: #'CogMethod *'].
+ self pruneYoungReferrers!
- openPIC := self cCoerceSimple: openPIC nextOpenPIC to: #'CogMethod *'.]!

Item was changed:
  ----- Method: CogMethodZone>>kosherYoungReferrers (in category 'young referers') -----
  kosherYoungReferrers
  "Answer that all entries in youngReferrers are in-use and have the cmRefersToYoung flag set.
  Used to check that the youngreferrers pruning routines work correctly."
+ <api>
  | pointer cogMethod |
  <var: #pointer type: #usqInt>
  <var: #cogMethod type: #'CogMethod *'>
  (youngReferrers > limitAddress
  or: [youngReferrers < mzFreeStart]) ifTrue:
  [^false].
  pointer := youngReferrers.
  [pointer < limitAddress] whileTrue:
  [cogMethod := coInterpreter cCoerceSimple: (objectMemory longAt: pointer) to: #'CogMethod *'.
+ cogMethod cmType ~= CMFree ifTrue:
+ [cogMethod cmRefersToYoung ifFalse:
+ [^false].
+ (self occurrencesInYoungReferrers: cogMethod) ~= 1 ifTrue:
+ [^false]].
- (cogMethod cmType ~= CMFree and: [cogMethod cmRefersToYoung]) ifFalse:
- [^false].
  pointer := pointer + objectMemory wordSize].
+ cogMethod := cogit cCoerceSimple: baseAddress to: #'CogMethod *'.
+ [cogMethod < mzFreeStart] whileTrue:
+ [cogMethod cmType ~= CMFree ifTrue:
+ [(self occurrencesInYoungReferrers: cogMethod) ~= (cogMethod cmRefersToYoung ifTrue: [1] ifFalse: [0]) ifTrue:
+ [^false]].
+ cogMethod := self methodAfter: cogMethod].
  ^true!

Item was changed:
  ----- Method: CogMethodZone>>pruneYoungReferrers (in category 'young referers') -----
  pruneYoungReferrers
  | source dest next |
+ <api>
  <var: #source type: #usqInt>
  <var: #dest type: #usqInt>
  <var: #next type: #usqInt>
  <inline: false>
 
  self assert: youngReferrers <= limitAddress.
  youngReferrers = limitAddress ifTrue:
  [^nil].
  dest := limitAddress.
  [next := dest - objectMemory wordSize.
  next >= youngReferrers
  and: [(coInterpreter cCoerceSimple: (objectMemory longAt: next) to: #'CogMethod *') cmRefersToYoung]] whileTrue:
  [dest := next].
  self assert: dest >= youngReferrers.
  source := dest - objectMemory wordSize.
  [source >= youngReferrers] whileTrue:
  [(coInterpreter cCoerceSimple: (objectMemory longAt: source) to: #'CogMethod *') cmRefersToYoung ifTrue:
  [self assert: source < (dest - objectMemory wordSize).
  objectMemory longAt: (dest := dest - objectMemory wordSize) put: (objectMemory longAt: source)].
  source := source - objectMemory wordSize].
  youngReferrers := dest.
  self assert: self kosherYoungReferrers!
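pruneYoungReferrers compacts the list in place: the young referrers occupy a word array at the top of the zone, between youngReferrers and limitAddress, and pruning copies the surviving entries (those whose method still has cmRefersToYoung set) toward limitAddress, then raises youngReferrers.  A sketch of that compaction over a plain C array, with the low bit of each entry standing in for the cmRefersToYoung flag:

```c
#include <assert.h>

/* Compact the surviving entries of list[start..n) toward index n,
   preserving their order, and return the new start index.  The low
   bit of each entry stands in for the method's cmRefersToYoung flag. */
static int pruneSketch(long *list, int start, int n) {
    int dest = n;
    for (int src = n - 1; src >= start; src--)
        if (list[src] & 1)
            list[--dest] = list[src];
    return dest;
}
```

Walking from high addresses down, as the Smalltalk does, means each surviving entry is written to a slot at or above the one it was read from, so no entry is overwritten before it has been examined.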

Item was removed:
- ----- Method: CogObjectRepresentation>>isUnannotatableConstant: (in category 'compile abstract instructions') -----
- isUnannotatableConstant: simStackEntry
- <inline: true>
- <var: 'simStackEntry' type: #'CogSimStackEntry *'>
- ^simStackEntry type = SSConstant
-  and: [(objectMemory isImmediate: simStackEntry constant)
- or: [(self shouldAnnotateObjectReference: simStackEntry constant) not]]!

Item was changed:
  CogClass subclass: #Cogit
+ instanceVariableNames: 'coInterpreter objectMemory objectRepresentation processor threadManager methodZone methodZoneBase codeBase minValidCallAddress lastNInstructions simulatedAddresses simulatedTrampolines simulatedVariableGetters simulatedVariableSetters printRegisters printInstructions compilationTrace clickConfirm breakPC breakBlock singleStep guardPageSize traceFlags traceStores breakMethod methodObj enumeratingCogMethod methodHeader initialPC endPC methodOrBlockNumArgs inBlock needsFrame hasYoungReferent hasMovableLiteral primitiveIndex backEnd literalsManager postCompileHook methodLabel stackCheckLabel blockEntryLabel blockEntryNoContextSwitch blockNoContextSwitchOffset stackOverflowCall sendMiss missOffset entryPointMask checkedEntryAlignment uncheckedEntryAlignment cmEntryOffset entry cmNoCheckEntryOffset noCheckEntry fullBlockEntry cbEntryOffset fullBlockNoContextSwitchEntry cbNoSwitchEntryOffset picMNUAbort picInterpretAbort endCPICCase0 endCPICCase1 firstCPICCaseOffs
 et cPICCaseSize cPICEndSize closedPICSize openPICSize fixups abstractOpcodes generatorTable byte0 byte1 byte2 byte3 bytecodePC bytecodeSetOffset opcodeIndex numAbstractOpcodes blockStarts blockCount labelCounter cStackAlignment expectedSPAlignment expectedFPAlignment codeModified maxLitIndex ceMethodAbortTrampoline cePICAbortTrampoline ceCheckForInterruptTrampoline ceCPICMissTrampoline ceReturnToInterpreterTrampoline ceBaseFrameReturnTrampoline ceReapAndResetErrorCodeTrampoline ceSendMustBeBooleanAddTrueTrampoline ceSendMustBeBooleanAddFalseTrampoline ceCannotResumeTrampoline ceEnterCogCodePopReceiverReg ceCallCogCodePopReceiverReg ceCallCogCodePopReceiverAndClassRegs cePrimReturnEnterCogCode cePrimReturnEnterCogCodeProfiling ceNonLocalReturnTrampoline ceFetchContextInstVarTrampoline ceStoreContextInstVarTrampoline ceEnclosingObjectTrampoline ceFlushICache ceCheckFeaturesFunction ceTraceLinkedSendTrampoline ceTraceBlockActivationTrampoline ceTraceStoreTrampoline ceGetFP ceGetSP ceCa
 ptureCStackPointers ordinarySendTrampolines superSendTrampolines directedSuperSendTrampolines directedSuperBindingSendTrampolines dynamicSuperSendTrampolines outerSendTrampolines selfSendTrampolines firstSend lastSend realCEEnterCogCodePopReceiverReg realCECallCogCodePopReceiverReg realCECallCogCodePopReceiverAndClassRegs trampolineTableIndex trampolineAddresses objectReferencesInRuntime runtimeObjectRefIndex cFramePointerInUse debugPrimCallStackOffset ceTryLockVMOwner ceUnlockVMOwner extA extB numExtB tempOop numIRCs indexOfIRC theIRCs receiverTags implicitReceiverSendTrampolines cogMethodSurrogateClass cogBlockMethodSurrogateClass nsSendCacheSurrogateClass CStackPointer CFramePointer cPICPrototype cPICEndOfCodeOffset cPICEndOfCodeLabel ceMallocTrampoline ceFreeTrampoline ceFFICalloutTrampoline debugBytecodePointers debugOpcodeIndices disassemblingMethod cogConstituentIndex directedSendUsesBinding ceCheckLZCNTFunction'
- instanceVariableNames: 'coInterpreter objectMemory objectRepresentation processor threadManager methodZone methodZoneBase codeBase minValidCallAddress lastNInstructions simulatedAddresses simulatedTrampolines simulatedVariableGetters simulatedVariableSetters printRegisters printInstructions compilationTrace clickConfirm breakPC breakBlock singleStep guardPageSize traceFlags traceStores breakMethod methodObj enumeratingCogMethod methodHeader initialPC endPC methodOrBlockNumArgs inBlock needsFrame hasYoungReferent primitiveIndex backEnd literalsManager postCompileHook methodLabel stackCheckLabel blockEntryLabel blockEntryNoContextSwitch blockNoContextSwitchOffset stackOverflowCall sendMiss missOffset entryPointMask checkedEntryAlignment uncheckedEntryAlignment cmEntryOffset entry cmNoCheckEntryOffset noCheckEntry fullBlockEntry cbEntryOffset fullBlockNoContextSwitchEntry cbNoSwitchEntryOffset picMNUAbort picInterpretAbort endCPICCase0 endCPICCase1 firstCPICCaseOffset cPICCaseSize cP
 ICEndSize closedPICSize openPICSize fixups abstractOpcodes generatorTable byte0 byte1 byte2 byte3 bytecodePC bytecodeSetOffset opcodeIndex numAbstractOpcodes blockStarts blockCount labelCounter cStackAlignment expectedSPAlignment expectedFPAlignment codeModified maxLitIndex ceMethodAbortTrampoline cePICAbortTrampoline ceCheckForInterruptTrampoline ceCPICMissTrampoline ceReturnToInterpreterTrampoline ceBaseFrameReturnTrampoline ceReapAndResetErrorCodeTrampoline ceSendMustBeBooleanAddTrueTrampoline ceSendMustBeBooleanAddFalseTrampoline ceCannotResumeTrampoline ceEnterCogCodePopReceiverReg ceCallCogCodePopReceiverReg ceCallCogCodePopReceiverAndClassRegs cePrimReturnEnterCogCode cePrimReturnEnterCogCodeProfiling ceNonLocalReturnTrampoline ceFetchContextInstVarTrampoline ceStoreContextInstVarTrampoline ceEnclosingObjectTrampoline ceFlushICache ceCheckFeaturesFunction ceTraceLinkedSendTrampoline ceTraceBlockActivationTrampoline ceTraceStoreTrampoline ceGetFP ceGetSP ceCaptureCStackPointer
 s ordinarySendTrampolines superSendTrampolines directedSuperSendTrampolines directedSuperBindingSendTrampolines dynamicSuperSendTrampolines outerSendTrampolines selfSendTrampolines firstSend lastSend realCEEnterCogCodePopReceiverReg realCECallCogCodePopReceiverReg realCECallCogCodePopReceiverAndClassRegs trampolineTableIndex trampolineAddresses objectReferencesInRuntime runtimeObjectRefIndex cFramePointerInUse debugPrimCallStackOffset ceTryLockVMOwner ceUnlockVMOwner extA extB numExtB tempOop numIRCs indexOfIRC theIRCs receiverTags implicitReceiverSendTrampolines cogMethodSurrogateClass cogBlockMethodSurrogateClass nsSendCacheSurrogateClass CStackPointer CFramePointer cPICPrototype cPICEndOfCodeOffset cPICEndOfCodeLabel ceMallocTrampoline ceFreeTrampoline ceFFICalloutTrampoline debugBytecodePointers debugOpcodeIndices disassemblingMethod cogConstituentIndex directedSendUsesBinding ceCheckLZCNTFunction'
  classVariableNames: 'AltBlockCreationBytecodeSize AltFirstSpecialSelector AltNSSendIsPCAnnotated AltNumSpecialSelectors AnnotationConstantNames AnnotationShift AnnotationsWithBytecodePCs BlockCreationBytecodeSize Debug DisplacementMask DisplacementX2N EagerInstructionDecoration FirstAnnotation FirstSpecialSelector HasBytecodePC IsAbsPCReference IsAnnotationExtension IsDirectedSuperBindingSend IsDirectedSuperSend IsDisplacementX2N IsNSDynamicSuperSend IsNSImplicitReceiverSend IsNSSelfSend IsNSSendCall IsObjectReference IsRelativeCall IsSendCall IsSuperSend MapEnd MaxCPICCases MaxCompiledPrimitiveIndex MaxStackAllocSize MaxX2NDisplacement NSCClassTagIndex NSCEnclosingObjectIndex NSCNumArgsIndex NSCSelectorIndex NSCTargetIndex NSSendIsPCAnnotated NumObjRefsInRuntime NumOopsPerNSC NumSpecialSelectors NumTrampolines ProcessorClass RRRName'
  poolDictionaries: 'CogAbstractRegisters CogCompilationConstants CogMethodConstants CogRTLOpcodes VMBasicConstants VMBytecodeConstants VMObjectIndices VMStackFrameOffsets'
  category: 'VMMaker-JIT'!
  Cogit class
  instanceVariableNames: 'generatorTable primitiveTable'!
 
  !Cogit commentStamp: 'eem 2/25/2017 17:53' prior: 0!
  I am the code generator for the Cog VM.  My job is to produce machine code versions of methods for faster execution and to manage inline caches for faster send performance.  I can be tested in the current image using my class-side in-image compilation facilities.  e.g. try
 
  StackToRegisterMappingCogit genAndDis: (Integer >> #benchFib)
 
  I have concrete subclasses that implement different levels of optimization:
  SimpleStackBasedCogit is the simplest code generator.
 
  StackToRegisterMappingCogit is the current production code generator.  It defers pushing operands
  to the stack until necessary and implements a register-based calling convention for low-arity sends.
 
  SistaCogit is an experimental code generator with support for counting
  conditional branches, intended to support adaptive optimization.
 
  RegisterAllocatingCogit is an experimental code generator with support for allocating temporary variables
  to registers. It is intended to serve as the superclass to SistaCogit once it is working.
 
  SistaRegisterAllocatingCogit and SistaCogitClone are temporary classes that allow testing a clone of
  SistaCogit that inherits from RegisterAllocatingCogit.  Once things work these will be merged and
  will replace SistaCogit.
 
  coInterpreter <CoInterpreterSimulator>
  the VM's interpreter with which I cooperate
  methodZoneManager <CogMethodZoneManager>
  the manager of the machine code zone
  objectRepresentation <CogObjectRepresentation>
  the object used to generate object accesses
  processor <BochsIA32Alien|?>
  the simulator that executes the IA32/x86 machine code I generate when simulating execution in Smalltalk
  simulatedTrampolines <Dictionary of Integer -> MessageSend>
  the dictionary mapping trap jump addresses to run-time routines used to warp from simulated machine code in to the Smalltalk run-time.
  simulatedVariableGetters <Dictionary of Integer -> MessageSend>
  the dictionary mapping trap read addresses to variables in run-time objects used to allow simulated machine code to read variables in the Smalltalk run-time.
  simulatedVariableSetters <Dictionary of Integer -> MessageSend>
  the dictionary mapping trap write addresses to variables in run-time objects used to allow simulated machine code to write variables in the Smalltalk run-time.
  printRegisters printInstructions clickConfirm <Boolean>
  flags controlling debug printing and code simulation
  breakPC <Integer>
  machine code pc breakpoint
  cFramePointer cStackPointer <Integer>
  the variables representing the C stack & frame pointers, which must change on FFI callback and return
  selectorOop <sqInt>
  the oop of the methodObj being compiled
  methodObj <sqInt>
  the bytecode method being compiled
  initialPC endPC <Integer>
  the start and end pcs of the methodObj being compiled
  methodOrBlockNumArgs <Integer>
  argument count of current method or block being compiled
  needsFrame <Boolean>
  whether methodObj or block needs a frame to execute
  primitiveIndex <Integer>
  primitive index of current method being compiled
  methodLabel <CogAbstractOpcode>
  label for the method header
  blockEntryLabel <CogAbstractOpcode>
  label for the start of the block dispatch code
  stackOverflowCall <CogAbstractOpcode>
  label for the call of ceStackOverflow in the method prolog
  sendMissCall <CogAbstractOpcode>
  label for the call of ceSICMiss in the method prolog
  entryOffset <Integer>
  offset of method entry code from start (header) of method
  entry <CogAbstractOpcode>
  label for the first instruction of the method entry code
  noCheckEntryOffset <Integer>
  offset of the start of a method proper (after the method entry code) from start (header) of method
  noCheckEntry <CogAbstractOpcode>
  label for the first instruction of start of a method proper
  fixups <Array of <AbstractOpcode Label | nil>>
  the labels for forward jumps that will be fixed up when reaching the relevant bytecode.  fixups has one element per byte in methodObj's bytecode; initialPC maps to fixups[0].
  abstractOpcodes <Array of <AbstractOpcode>>
  the code generated when compiling methodObj
  byte0 byte1 byte2 byte3 <Integer>
  individual bytes of current bytecode being compiled in methodObj
  bytecodePointer <Integer>
  bytecode pc (same as Smalltalk) of the current bytecode being compiled
  opcodeIndex <Integer>
  the index of the next free entry in abstractOpcodes (this code is translated into C where OrderedCollection et al do not exist)
  numAbstractOpcodes <Integer>
  the number of elements in abstractOpcodes
  blockStarts <Array of <BlockStart>>
  the starts of blocks in the current method
  blockCount
  the index into blockStarts as they are being noted, and hence eventually the total number of blocks in the current method
  labelCounter <Integer>
  a nicety for numbering labels not needed in the production system but probably not expensive enough to worry about
  ceStackOverflowTrampoline <Integer>
  ceSend0ArgsTrampoline <Integer>
  ceSend1ArgsTrampoline <Integer>
  ceSend2ArgsTrampoline <Integer>
  ceSendNArgsTrampoline <Integer>
  ceSendSuper0ArgsTrampoline <Integer>
  ceSendSuper1ArgsTrampoline <Integer>
  ceSendSuper2ArgsTrampoline <Integer>
  ceSendSuperNArgsTrampoline <Integer>
  ceSICMissTrampoline <Integer>
  ceCPICMissTrampoline <Integer>
  ceStoreCheckTrampoline <Integer>
  ceReturnToInterpreterTrampoline <Integer>
  ceBaseFrameReturnTrampoline <Integer>
  ceSendMustBeBooleanTrampoline <Integer>
  ceClosureCopyTrampoline <Integer>
  the various trampolines (system-call-like jumps from machine code to the run-time).
  See Cogit>>generateTrampolines for the mapping from trampoline to run-time
  routine and then read the run-time routine for a functional description.
  ceEnterCogCodePopReceiverReg <Integer>
  the enilopmart (jump from run-time to machine-code)
  methodZoneBase <Integer>
  !
  Cogit class
  instanceVariableNames: 'generatorTable primitiveTable'!

Item was changed:
  ----- Method: Cogit>>annotate:objRef: (in category 'method map') -----
  annotate: abstractInstruction objRef: anOop
  <var: #abstractInstruction type: #'AbstractInstruction *'>
  <returnTypeC: #'AbstractInstruction *'>
  (objectRepresentation shouldAnnotateObjectReference: anOop) ifTrue:
+ [self setHasMovableLiteral: true.
+ (objectMemory isYoungObject: anOop) ifTrue:
- [(objectMemory isYoungObject: anOop) ifTrue:
  [self setHasYoungReferent: true].
  abstractInstruction annotation: IsObjectReference].
  ^abstractInstruction!

Item was changed:
  ----- Method: Cogit>>compileCogFullBlockMethod: (in category 'compile abstract instructions') -----
  compileCogFullBlockMethod: numCopied
  <returnTypeC: #'CogMethod *'>
  <option: #SistaV1BytecodeSet>
  | numBytecodes numBlocks numCleanBlocks result |
+ self setHasMovableLiteral: false.
  self setHasYoungReferent: (objectMemory isYoungObject: methodObj).
  methodOrBlockNumArgs := coInterpreter argumentCountOf: methodObj.
  inBlock := InFullBlock.
  postCompileHook := nil.
  maxLitIndex := -1.
  self assert: (coInterpreter primitiveIndexOf: methodObj) = 0.
  initialPC := coInterpreter startPCOfMethod: methodObj.
  "initial estimate.  Actual endPC is determined in scanMethod."
  endPC := objectMemory numBytesOf: methodObj.
  numBytecodes := endPC - initialPC + 1.
  primitiveIndex := 0.
  self allocateOpcodes: (numBytecodes + 10) * self estimateOfAbstractOpcodesPerBytecodes
  bytecodes: numBytecodes
  ifFail: [^coInterpreter cCoerceSimple: MethodTooBig to: #'CogMethod *'].
  self flag: #TODO. "currently copiedValue access implies frameful method, this is suboptimal"
  (numBlocks := self scanMethod) < 0 ifTrue:
  [^coInterpreter cCoerceSimple: numBlocks to: #'CogMethod *'].
  self assert: numBlocks = 0. "blocks in full blocks are full blocks, they are not inlined."
  numCleanBlocks := self scanForCleanBlocks.
  self assert: numCleanBlocks = 0. "blocks in full blocks are full blocks, they are not inlined."
  self allocateBlockStarts: numBlocks + numCleanBlocks.
  blockCount := 0.
  numCleanBlocks > 0 ifTrue:
  [self addCleanBlockStarts].
  (self maybeAllocAndInitCounters
  and: [self maybeAllocAndInitIRCs]) ifFalse: "Inaccurate error code, but it'll do.  This will likely never fail."
  [^coInterpreter cCoerceSimple: InsufficientCodeSpace to: #'CogMethod *'].
 
  blockEntryLabel := nil.
  methodLabel dependent: nil.
  (result := self compileEntireFullBlockMethod: numCopied) < 0 ifTrue:
  [^coInterpreter cCoerceSimple: result to: #'CogMethod *'].
  ^self generateCogFullBlock!

Item was changed:
  ----- Method: Cogit>>compileCogMethod: (in category 'compile abstract instructions') -----
  compileCogMethod: selector
  <returnTypeC: #'CogMethod *'>
  | numBytecodes numBlocks numCleanBlocks result extra |
+ self setHasMovableLiteral: false.
  self setHasYoungReferent: ((objectMemory isYoungObject: methodObj)
   or: [objectMemory isYoung: selector]).
  methodOrBlockNumArgs := coInterpreter argumentCountOf: methodObj.
  inBlock := 0.
  postCompileHook := nil.
  maxLitIndex := -1.
  extra := ((primitiveIndex := coInterpreter primitiveIndexOf: methodObj) > 0
  and: [(coInterpreter isQuickPrimitiveIndex: primitiveIndex) not])
  ifTrue: [30]
  ifFalse: [10].
  initialPC := coInterpreter startPCOfMethod: methodObj.
  "initial estimate.  Actual endPC is determined in scanMethod."
  endPC := (coInterpreter isQuickPrimitiveIndex: primitiveIndex)
  ifTrue: [initialPC - 1]
  ifFalse: [objectMemory numBytesOf: methodObj].
  numBytecodes := endPC - initialPC + 1.
  self allocateOpcodes: (numBytecodes + extra) * self estimateOfAbstractOpcodesPerBytecodes
  bytecodes: numBytecodes
  ifFail: [^coInterpreter cCoerceSimple: MethodTooBig to: #'CogMethod *'].
  (numBlocks := self scanMethod) < 0 ifTrue:
  [^coInterpreter cCoerceSimple: numBlocks to: #'CogMethod *'].
  numCleanBlocks := self scanForCleanBlocks.
  self methodFoundInvalidPostScan ifTrue:
  [^coInterpreter cCoerceSimple: ShouldNotJIT to: #'CogMethod *'].
  self allocateBlockStarts: numBlocks + numCleanBlocks.
  blockCount := 0.
  numCleanBlocks > 0 ifTrue:
  [self addCleanBlockStarts].
  (self maybeAllocAndInitCounters
  and: [self maybeAllocAndInitIRCs]) ifFalse: "Inaccurate error code, but it'll do.  This will likely never fail."
  [^coInterpreter cCoerceSimple: InsufficientCodeSpace to: #'CogMethod *'].
 
  blockEntryLabel := nil.
  methodLabel dependent: nil.
  (result := self compileEntireMethod) < 0 ifTrue:
  [^coInterpreter cCoerceSimple: result to: #'CogMethod *'].
  ^self generateCogMethod: selector!

Item was changed:
  ----- Method: Cogit>>fillInCPICHeader:numArgs:numCases:hasMNUCase:selector: (in category 'generate machine code') -----
  fillInCPICHeader: pic numArgs: numArgs numCases: numCases hasMNUCase: hasMNUCase selector: selector
  <returnTypeC: #'CogMethod *'>
  <var: #pic type: #'CogMethod *'>
  <inline: true>
  self assert: (objectMemory isYoung: selector) not.
  pic cmType: CMClosedPIC.
  pic objectHeader: 0.
  pic blockSize: closedPICSize.
  pic methodObject: 0.
  pic methodHeader: 0.
  pic selector: selector.
  pic cmNumArgs: numArgs.
+ pic cmHasMovableLiteral: false.
  pic cmRefersToYoung: false.
  pic cmUsageCount: self initialClosedPICUsageCount.
  pic cpicHasMNUCase: hasMNUCase.
  pic cPICNumCases: numCases.
  pic blockEntryOffset: 0.
  self assert: pic cmType = CMClosedPIC.
  self assert: pic selector = selector.
  self assert: pic cmNumArgs = numArgs.
  self assert: pic cPICNumCases = numCases.
  self assert: (backEnd callTargetFromReturnAddress: pic asInteger + missOffset) = (self picAbortTrampolineFor: numArgs).
  self assert: closedPICSize = (methodZone roundUpLength: closedPICSize).
  processor flushICacheFrom: pic asUnsignedInteger to: pic asUnsignedInteger + closedPICSize.
  self maybeEnableSingleStep.
  ^pic!

Item was changed:
  ----- Method: Cogit>>fillInMethodHeader:size:selector: (in category 'generate machine code') -----
  fillInMethodHeader: method size: size selector: selector
  <returnTypeC: #'CogMethod *'>
  <var: #method type: #'CogMethod *'>
  | originalMethod rawHeader |
  <var: #originalMethod type: #'CogMethod *'>
  method cmType: CMMethod.
  method objectHeader: objectMemory nullHeaderForMachineCodeMethod.
  method blockSize: size.
  method methodObject: methodObj.
  rawHeader := coInterpreter rawHeaderOf: methodObj.
  "If the method has already been cogged (e.g. Newspeak accessors) then
  leave the original method attached to its cog method, but get the right header."
  (coInterpreter isCogMethodReference: rawHeader)
  ifTrue:
  [originalMethod := self cCoerceSimple: rawHeader to: #'CogMethod *'.
  self assert: originalMethod blockSize = size.
  self assert: methodHeader = originalMethod methodHeader.
  NewspeakVM ifTrue:
  [methodZone addToUnpairedMethodList: method]]
  ifFalse:
  [coInterpreter rawHeaderOf: methodObj put: method asInteger.
  NewspeakVM ifTrue:
  [method nextMethodOrIRCs: theIRCs]].
  method methodHeader: methodHeader.
  method selector: selector.
  method cmNumArgs: (coInterpreter argumentCountOfMethodHeader: methodHeader).
+ method cmHasMovableLiteral: hasMovableLiteral.
  (method cmRefersToYoung: hasYoungReferent) ifTrue:
  [methodZone addToYoungReferrers: method].
  method cmUsageCount: self initialMethodUsageCount.
  method cpicHasMNUCase: false.
  method cmUsesPenultimateLit: maxLitIndex >= ((objectMemory literalCountOfMethodHeader: methodHeader) - 2).
  method blockEntryOffset: (blockEntryLabel notNil
  ifTrue: [blockEntryLabel address - method asInteger]
  ifFalse: [0]).
  "This can be an error check since a large stackCheckOffset is caused by compiling
  a machine-code primitive, and hence depends on the Cogit, not the input method."
  needsFrame ifTrue:
  [stackCheckLabel address - method asInteger <= MaxStackCheckOffset ifFalse:
  [self error: 'too much code for stack check offset']].
  method stackCheckOffset: (needsFrame
  ifTrue: [stackCheckLabel address - method asInteger]
  ifFalse: [0]).
  self assert: (backEnd callTargetFromReturnAddress: method asInteger + missOffset)
  = (self methodAbortTrampolineFor: method cmNumArgs).
  self assert: size = (methodZone roundUpLength: size).
  processor flushICacheFrom: method asUnsignedInteger to: method asUnsignedInteger + size.
  self maybeEnableSingleStep.
  ^method!
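
fillInMethodHeader:size:selector: now copies the per-compilation hasMovableLiteral flag into the CogMethod header. A minimal C sketch of that hand-off (the struct is an illustrative subset; the real CogMethod layout is generated by Slang and packs many more fields):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative subset of the CogMethod header flags. */
typedef struct {
    unsigned cmType              : 3;
    unsigned cmNumArgs           : 8;
    unsigned cmRefersToYoung     : 1;
    unsigned cmHasMovableLiteral : 1;  /* new in this commit */
} CogMethod;

enum { CMMethod = 1 };

/* per-compilation state accumulated while JITting */
static bool hasMovableLiteral;
static bool hasYoungReferent;

static void fillInMethodHeader(CogMethod *m, unsigned numArgs)
{
    m->cmType = CMMethod;
    m->cmNumArgs = numArgs;
    m->cmHasMovableLiteral = hasMovableLiteral;  /* for the post-become scan */
    m->cmRefersToYoung = hasYoungReferent;       /* for the young-referrers list */
}
```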

Item was changed:
  ----- Method: Cogit>>fillInOPICHeader:numArgs:selector: (in category 'generate machine code') -----
  fillInOPICHeader: pic numArgs: numArgs selector: selector
  <returnTypeC: #'CogMethod *'>
  <var: #pic type: #'CogMethod *'>
  <inline: true>
  pic cmType: CMOpenPIC.
  pic objectHeader: 0.
  pic blockSize: openPICSize.
  "pic methodObject: 0.""This is also the nextOpenPIC link so don't initialize it"
  methodZone addToOpenPICList: pic.
  pic methodHeader: 0.
  pic selector: selector.
  pic cmNumArgs: numArgs.
+ pic cmHasMovableLiteral: (objectMemory isNonImmediate: selector).
  (pic cmRefersToYoung: (objectMemory isYoung: selector)) ifTrue:
  [methodZone addToYoungReferrers: pic].
  pic cmUsageCount: self initialOpenPICUsageCount.
  pic cpicHasMNUCase: false.
  pic cPICNumCases: 0.
  pic blockEntryOffset: 0.
  self assert: pic cmType = CMOpenPIC.
  self assert: pic selector = selector.
  self assert: pic cmNumArgs = numArgs.
  self assert: (backEnd callTargetFromReturnAddress: pic asInteger + missOffset) = (self picAbortTrampolineFor: numArgs).
  self assert: openPICSize = (methodZone roundUpLength: openPICSize).
  processor flushICacheFrom: pic asUnsignedInteger to: pic asUnsignedInteger + openPICSize.
  self maybeEnableSingleStep.
  ^pic!

Item was changed:
  ----- Method: Cogit>>followForwardedLiteralsIn: (in category 'garbage collection') -----
  followForwardedLiteralsIn: cogMethod
  <api>
  <option: #SpurObjectMemory>
  <var: #cogMethod type: #'CogMethod *'>
+ | hasYoungObj hasYoungObjPtr |
  self assert: (cogMethod cmType ~= CMMethod or: [(objectMemory isForwarded: cogMethod methodObject) not]).
+ hasYoungObj := false.
  (objectMemory shouldRemapOop: cogMethod selector) ifTrue:
  [cogMethod selector: (objectMemory remapObj: cogMethod selector).
  (objectMemory isYoung: cogMethod selector) ifTrue:
+ [hasYoungObj := true]].
+ hasYoungObjPtr := (self addressOf: hasYoungObj put: [:val| hasYoungObj := val]) asInteger.
- [methodZone ensureInYoungReferrers: cogMethod]].
  self mapFor: cogMethod
  performUntil: #remapIfObjectRef:pc:hasYoung:
+ arg: hasYoungObjPtr.
+ hasYoungObj
+ ifTrue: [methodZone ensureInYoungReferrers: cogMethod]
+ ifFalse: [cogMethod cmRefersToYoung: false]!
- arg: 0!

Item was added:
+ ----- Method: Cogit>>followMovableLiteralsAndUpdateYoungReferrers (in category 'garbage collection') -----
+ followMovableLiteralsAndUpdateYoungReferrers
+ "To avoid runtime checks on literal variable and literal accesses in == and ~~,
+ we follow literals in methods having movable literals in the postBecome action.
+ To avoid scanning every method, we annotate cogMethods with the
+ cmHasMovableLiteral flag."
+ <option: #SpurObjectMemory>
+ <api>
+ <returnTypeC: #void>
+ | cogMethod |
+ <var: #cogMethod type: #'CogMethod *'>
+ self assert: methodZone kosherYoungReferrers.
+ "methodZone firstBogusYoungReferrer"
+ "methodZone occurrencesInYoungReferrers: methodZone firstBogusYoungReferrer"
+ codeModified := false.
+ cogMethod := self cCoerceSimple: methodZoneBase to: #'CogMethod *'.
+ [cogMethod < methodZone limitZony] whileTrue:
+ [cogMethod cmType ~= CMFree ifTrue:
+ [cogMethod cmHasMovableLiteral ifTrue:
+ [self followForwardedLiteralsIn: cogMethod]].
+ cogMethod := methodZone methodAfter: cogMethod].
+ methodZone pruneYoungReferrers.
+ codeModified ifTrue: "After updating oops in inline caches we need to flush the icache."
+ [processor flushICacheFrom: codeBase asUnsignedInteger to: methodZone limitZony asUnsignedInteger]!
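
A C sketch of the scan above: after a become, walk the method zone once and follow literals only in live methods that were flagged at JIT time (the walk and names are simplified; the real code advances with methodAfter: and flushes the icache if any code was modified):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum { CMFree = 0, CMMethod = 1 };

typedef struct {
    int  cmType;
    bool cmHasMovableLiteral;
    bool literalsFollowed;   /* instrumentation for this sketch only */
} CogMethod;

static void followForwardedLiteralsIn(CogMethod *m)
{
    m->literalsFollowed = true;  /* real code remaps each annotated literal */
}

/* Visit only methods that can possibly reference a becommed literal,
   which is what makes the flag a win over scanning every method. */
static void followMovableLiterals(CogMethod *zone, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (zone[i].cmType != CMFree && zone[i].cmHasMovableLiteral)
            followForwardedLiteralsIn(&zone[i]);
}
```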

Item was changed:
  ----- Method: Cogit>>genLoadInlineCacheWithSelector: (in category 'in-line cacheing') -----
  genLoadInlineCacheWithSelector: selectorIndex
  "The in-line cache for a send is implemented as a constant load into ClassReg.
  We always use a 32-bit load, even in 64-bits.
 
  In the initial (unlinked) state the in-line cache is notionally loaded with the selector.
  But since in 64-bits an arbitrary selector oop won't fit in a 32-bit constant load, we
  instead load the cache with the selector's index, either into the literal frame of the
  current method, or into the special selector array.  Negative values are 1-relative
  indices into the special selector array.
 
  When a send is linked, the load of the selector, or selector index, is overwritten with a
  load of the receiver's class, or class tag.  Hence, the 64-bit VM is currently constrained
  to use class indices as cache tags.  If out-of-line literals are used, distinct caches /must
  not/ share cache locations, for if they do, send cacheing will be confused by the sharing.
  Hence we use the MoveUniqueC32:R: instruction that will not share literal locations."
 
  | cacheValue |
  self assert: (selectorIndex < 0
  ifTrue: [selectorIndex negated between: 1 and: self numSpecialSelectors]
  ifFalse: [selectorIndex between: 0 and: (objectMemory literalCountOf: methodObj) - 1]).
 
  self inlineCacheTagsAreIndexes
  ifTrue:
  [cacheValue := selectorIndex]
  ifFalse:
  [| selector |
  selector := selectorIndex < 0
  ifTrue: [(coInterpreter specialSelector: -1 - selectorIndex)]
  ifFalse: [self getLiteral: selectorIndex].
  self assert: (objectMemory addressCouldBeOop: selector).
+ (objectMemory isNonImmediate: selector) ifTrue:
+ [self setHasMovableLiteral: true].
  (objectMemory isYoung: selector) ifTrue:
  [self setHasYoungReferent: true].
  cacheValue := selector].
 
  self MoveUniqueC32: cacheValue R: ClassReg!
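
The 32-bit cache value above encodes either a method-literal index (non-negative) or a 1-relative index into the special selector array (negative). A small sketch of the encode/decode pair implied by the method (function names are illustrative; the decode mirrors `specialSelector: -1 - selectorIndex`):

```c
#include <assert.h>

/* Negative cache values are 1-relative special-selector indices;
   non-negative values index the method's literal frame directly. */
static int encodeSpecialSelector(int zeroBasedIndex) { return -1 - zeroBasedIndex; }
static int decodeSpecialSelector(int cacheValue)     { return -1 - cacheValue; }
static int isSpecialSelector(int cacheValue)         { return cacheValue < 0; }
```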

Item was added:
+ ----- Method: Cogit>>kosherYoungReferrers (in category 'jit - api') -----
+ kosherYoungReferrers
+ <doNotGenerate>
+ methodZone kosherYoungReferrers!

Item was changed:
  ----- Method: Cogit>>printMethodHeader:on: (in category 'disassembly') -----
  printMethodHeader: cogMethod on: aStream
  <doNotGenerate>
  self cCode: ''
  inSmalltalk:
  [cogMethod isInteger ifTrue:
  [^self printMethodHeader: (self cogMethodOrBlockSurrogateAt: cogMethod) on: aStream]].
  aStream ensureCr.
  cogMethod asInteger printOn: aStream base: 16.
  cogMethod cmType = CMMethod ifTrue:
  [aStream crtab; nextPutAll: 'objhdr: '.
  cogMethod objectHeader printOn: aStream base: 16].
  cogMethod cmType = CMBlock ifTrue:
  [aStream crtab; nextPutAll: 'homemth: '.
  cogMethod cmHomeMethod asUnsignedInteger printOn: aStream base: 16.
  aStream
  nextPutAll: ' (offset '; print: cogMethod homeOffset; nextPut: $);
  crtab; nextPutAll: 'startpc: '; print: cogMethod startpc].
  aStream
  crtab; nextPutAll: 'nArgs: '; print: cogMethod cmNumArgs;
  tab;    nextPutAll: 'type: '; print: cogMethod cmType.
  (cogMethod cmType ~= 0 and: [cogMethod cmType ~= CMBlock]) ifTrue:
  [aStream crtab; nextPutAll: 'blksiz: '.
  cogMethod blockSize printOn: aStream base: 16.
  cogMethod cmType = CMMethod ifTrue:
  [aStream crtab; nextPutAll: 'method: '.
  cogMethod methodObject printOn: aStream base: 16.
  aStream crtab; nextPutAll: 'mthhdr: '.
  cogMethod methodHeader printOn: aStream base: 16].
  aStream crtab; nextPutAll: 'selctr: '.
  cogMethod selector printOn: aStream base: 16.
  (coInterpreter lookupAddress: cogMethod selector) ifNotNil:
  [:string| aStream nextPut: $=; nextPutAll: string].
  cogMethod selector = objectMemory nilObject ifTrue:
  [aStream space; nextPut: $(; nextPutAll: (coInterpreter stringOf: (coInterpreter maybeSelectorOfMethod: cogMethod methodObject)); nextPut: $)].
  cogMethod cmType = CMMethod ifTrue:
  [aStream crtab; nextPutAll: 'blkentry: '.
  cogMethod blockEntryOffset printOn: aStream base: 16.
  cogMethod blockEntryOffset ~= 0 ifTrue:
  [aStream nextPutAll: ' => '.
  cogMethod asInteger + cogMethod blockEntryOffset printOn: aStream base: 16]]].
  cogMethod cmType = CMClosedPIC
  ifTrue:
  [aStream crtab; nextPutAll: 'cPICNumCases: '.
  cogMethod cPICNumCases printOn: aStream base: 16.
  aStream tab; nextPutAll: 'cpicHasMNUCase: ';
  nextPutAll: (cogMethod cpicHasMNUCase ifTrue: ['yes'] ifFalse: ['no'])]
  ifFalse:
  [aStream crtab; nextPutAll: 'stackCheckOffset: '.
  cogMethod stackCheckOffset printOn: aStream base: 16.
  cogMethod stackCheckOffset > 0 ifTrue:
  [aStream nextPut: $/.
  cogMethod asInteger + cogMethod stackCheckOffset printOn: aStream base: 16].
  cogMethod cmType = CMBlock
  ifTrue:
  [aStream
  crtab;
  nextPutAll: 'cbUsesInstVars ';
  nextPutAll: (cogMethod cbUsesInstVars ifTrue: ['yes'] ifFalse: ['no'])]
  ifFalse:
  [aStream
  crtab;
  nextPutAll: 'cmRefersToYoung: ';
  nextPutAll: (cogMethod cmRefersToYoung ifTrue: ['yes'] ifFalse: ['no']);
  tab;
+ nextPutAll: 'cmHasMovableLiteral: ';
+ nextPutAll: (cogMethod cmHasMovableLiteral ifTrue: ['yes'] ifFalse: ['no']);
+ tab;
  nextPutAll: 'cmIsFullBlock: ';
  nextPutAll: (cogMethod cmIsFullBlock ifTrue: ['yes'] ifFalse: ['no'])].
  cogMethod cmType = CMMethod ifTrue:
  [([cogMethod nextMethodOrIRCs] on: MessageNotUnderstood do: [:ex| nil]) ifNotNil:
  [:nmoircs| aStream crtab; nextPutAll: 'nextMethodOrIRCs: '.
  nmoircs = 0 ifTrue: [aStream print: nmoircs] ifFalse: [coInterpreter printHex: nmoircs]].
  ([cogMethod counters] on: MessageNotUnderstood do: [:ex| nil]) ifNotNil:
  [:cntrs| aStream crtab; nextPutAll: 'counters: '.
  cntrs = 0 ifTrue: [aStream print: cntrs] ifFalse: [coInterpreter printHex: cntrs]]]].
  aStream cr; flush!

Item was added:
+ ----- Method: Cogit>>pruneYoungReferrers (in category 'jit - api') -----
+ pruneYoungReferrers
+ <doNotGenerate>
+ methodZone pruneYoungReferrers!

Item was added:
+ ----- Method: Cogit>>setHasMovableLiteral: (in category 'accessing') -----
+ setHasMovableLiteral: boolean
+ "Written this way to allow break-pointing in the simulator."
+ <cmacro: '(b) (hasMovableLiteral = (b))'>
+ hasMovableLiteral := boolean!

Item was changed:
  ----- Method: Cogit>>setHasYoungReferent: (in category 'accessing') -----
  setHasYoungReferent: boolean
+ "Written this way to allow break-pointing in the simulator."
- "Written this way to allow reak-pointing in the simulator."
  <cmacro: '(b) (hasYoungReferent = (b))'>
  "boolean ifTrue:
  [self halt]."
  "(hasYoungReferent == false and: [boolean == true]) ifTrue:
  [self halt]."
  hasYoungReferent := boolean!

Item was added:
+ ----- Method: MethodReference>>sourceStringOrNil (in category '*VMMaker-extensions') -----
+ sourceStringOrNil
+ ^(self actualClass sourceCodeAt: self methodSymbol ifAbsent: [^nil]) asString!

Item was changed:
  ----- Method: RegisterAllocatingCogit>>genForwardersInlinedIdenticalOrNotIf: (in category 'bytecode generators') -----
  genForwardersInlinedIdenticalOrNotIf: orNot
  | nextPC branchDescriptor unforwardRcvr argReg targetPC
   unforwardArg  rcvrReg postBranchPC retry fixup
   comparison
   needMergeToTarget needMergeToContinue |
  <var: #branchDescriptor type: #'BytecodeDescriptor *'>
  <var: #toContinueLabel type: #'AbstractInstruction *'>
  <var: #toTargetLabel type: #'AbstractInstruction *'>
  <var: #comparison type: #'AbstractInstruction *'>
  <var: #retry type: #'AbstractInstruction *'>
 
  self extractMaybeBranchDescriptorInto: [ :descr :next :postBranch :target |
  branchDescriptor := descr. nextPC := next. postBranchPC := postBranch. targetPC := target ].
 
  "If an operand is an annotatable constant, it may be forwarded, so we need to store it into a
  register so the forwarder check can jump back to the comparison after unforwarding the constant.
  However, if one of the operands is an unannotatable constant, we do not allocate a register for it
  (machine code will use operations on constants) and do not generate forwarder checks."
+ unforwardRcvr := (self ssValue: 1) type ~= SSConstant.
+ unforwardArg := self ssTop type ~= SSConstant.
- unforwardRcvr := (objectRepresentation isUnannotatableConstant: (self ssValue: 1)) not.
- unforwardArg := (objectRepresentation isUnannotatableConstant: self ssTop) not.
 
  self
  allocateEqualsEqualsRegistersArgNeedsReg: unforwardArg
  rcvrNeedsReg: unforwardRcvr
  into: [ :rcvr :arg | rcvrReg:= rcvr. argReg := arg ].
 
  "If not followed by a branch, resolve to true or false."
  (branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:
  [^self
  genIdenticalNoBranchArgIsConstant: unforwardArg not
  rcvrIsConstant: unforwardRcvr not
  argReg: argReg
  rcvrReg: rcvrReg
  orNotIf: orNot].
 
  self assert: (unforwardArg or: [unforwardRcvr]).
  self ssPop: 2. "If we had moveAllButTop: 2 volatileSimStackEntriesToRegistersPreserving: we could avoid the extra ssPop:s"
  self moveVolatileSimStackEntriesToRegistersPreserving:
  (self allocatedRegisters bitOr: (argReg = NoReg
  ifTrue: [self registerMaskFor: rcvrReg]
  ifFalse:
  [rcvrReg = NoReg
  ifTrue: [self registerMaskFor: argReg]
  ifFalse: [self registerMaskFor: rcvrReg and: argReg]])).
  retry := self Label.
  self ssPop: -2.
  self genCmpArgIsConstant: unforwardArg not rcvrIsConstant: unforwardRcvr not argReg: argReg rcvrReg: rcvrReg.
  self ssPop: 2.
 
  (self fixupAt: nextPC) notAFixup "The next instruction is dead.  We can skip it."
  ifTrue:  [deadCode := true]
  ifFalse: [self deny: deadCode]. "push dummy value below"
 
  "self printSimStack; printSimStack: (self fixupAt: postBranchPC) mergeSimStack"
  "If there are merges to be performed on the forward branches we have to execute
  the merge code only along the path requiring that merge, and exactly once."
  needMergeToTarget := self mergeRequiredForJumpTo: targetPC.
  needMergeToContinue := self mergeRequiredForJumpTo: postBranchPC.
  orNot == branchDescriptor isBranchTrue
  ifFalse: "a == b ifTrue: ... or a ~~ b ifFalse: ... jump on equal to target pc"
  [fixup := needMergeToContinue
  ifTrue: [0] "jumps will fall-through to to-continue merge code"
  ifFalse: [self ensureFixupAt: postBranchPC].
  comparison := self JumpZero: (needMergeToTarget
  ifTrue: [0] "comparison will be fixed up to to-target merge code"
  ifFalse: [self ensureFixupAt: targetPC])]
  ifTrue: "a == b ifFalse: ... or a ~~ b ifTrue: ... jump on equal to post-branch pc"
  [fixup := needMergeToTarget
  ifTrue: [0] "jumps will fall-through to to-target merge code"
  ifFalse: [self ensureFixupAt: targetPC].
  comparison := self JumpZero: (needMergeToContinue
  ifTrue: [0] "comparison will be fixed up to to-continue merge code"
  ifFalse: [self ensureFixupAt: postBranchPC])].
 
  "The forwarders check(s) need(s) to jump back to the comparison (retry) if a forwarder is found,
  else jump forward either to the next forwarder check or to the postBranch or branch target (fixup).
  But if there is merge code along a path, the jump must be to the merge code."
  (unforwardArg and: [unforwardRcvr]) ifTrue:
  [objectRepresentation genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: retry].
  objectRepresentation
  genEnsureOopInRegNotForwarded: (unforwardRcvr ifTrue: [rcvrReg] ifFalse: [argReg])
  scratchReg: TempReg
  ifForwarder: retry
  ifNotForwarder: fixup.
  "If fixup is zero then the ifNotForwarder path falls through to a Label which is interpreted
  as either to-continue or to-target, depending on orNot == branchDescriptor isBranchTrue."
  orNot == branchDescriptor isBranchTrue
  ifFalse: "a == b ifTrue: ... or a ~~ b ifFalse: ... jump on equal to target pc"
  [needMergeToContinue ifTrue: "fall-through to to-continue merge code"
  [self Jump: (self ensureFixupAt: postBranchPC)].
  needMergeToTarget ifTrue: "fixup comparison to to-target merge code"
  [comparison jmpTarget: self Label.
  self Jump: (self ensureFixupAt: targetPC)]]
  ifTrue: "a == b ifFalse: ... or a ~~ b ifTrue: ... jump on equal to post-branch pc"
  [needMergeToTarget ifTrue: "fall-through to to-target merge code"
  [self Jump: (self ensureFixupAt: targetPC)].
  needMergeToContinue ifTrue: "fixup comparison to to-continue merge code"
  [comparison jmpTarget: self Label.
  self Jump: (self ensureFixupAt: postBranchPC)]].
 
  deadCode ifFalse: "duplicate the merge fixup's top of stack so as to avoid a false conflict."
  [self ssPushDesc: ((self fixupAt: nextPC) mergeSimStack at: simStackPtr + 1)].
  ^0!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>genNSSend:numArgs:depth:sendTable: (in category 'bytecode generators') -----
  genNSSend: selectorIndex numArgs: numArgs depth: depth sendTable: sendTable
  <var: #sendTable type: #'sqInt *'>
  | selector nsSendCache |
  self assert: (selectorIndex between: 0 and: (objectMemory literalCountOf: methodObj) - 1).
  selector := self getLiteral: selectorIndex.
  self assert: (objectMemory addressCouldBeOop: selector).
+ (objectMemory isNonImmediate: selector) ifTrue:
+ [self setHasMovableLiteral: true].
  (objectMemory isYoung: selector) ifTrue:
  [self setHasYoungReferent: true].
 
  nsSendCache := theIRCs + (NumOopsPerNSC * objectMemory bytesPerOop * indexOfIRC).
  indexOfIRC := indexOfIRC + 1.
  self assert: (objectMemory isInOldSpace: nsSendCache).
  self initializeNSSendCache: nsSendCache selector: selector numArgs: numArgs depth: depth.
 
  "This leaves the method receiver on the stack, which might not be the implicit receiver.
  But the lookup trampoline will establish the on-stack receiver once it locates it."
  self marshallAbsentReceiverSendArguments: numArgs.
 
  "Load the cache last so it is a fixed distance from the call."
  self MoveUniqueCw: nsSendCache R: SendNumArgsReg.
  self CallNewspeakSend: (sendTable at: (numArgs min: NumSendTrampolines - 1)).
 
  self PushR: ReceiverResultReg.
  ^0!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>genPushLiteralVariable: (in category 'bytecode generator support') -----
  genPushLiteralVariable: literalIndex
  <inline: false>
  | association |
  association := self getLiteral: literalIndex.
  "If followed by a directed super send bytecode, avoid generating any code yet.
  The association will be passed to the directed send trampoline in a register
  and fully dereferenced only when first linked.  It will be ignored in later sends."
  BytecodeSetHasDirectedSuperSend ifTrue:
  [self deny: directedSendUsesBinding.
  self nextDescriptorExtensionsAndNextPCInto:
  [:descriptor :exta :extb :followingPC|
  (self isDirectedSuper: descriptor extA: exta extB: extb) ifTrue:
  [tempOop := association.
  directedSendUsesBinding := true.
  ^0]]].
  "N.B. Do _not_ use ReceiverResultReg to avoid overwriting receiver in assignment in frameless methods."
  self genMoveConstant: association R: ClassReg.
  objectRepresentation
- genEnsureObjInRegNotForwarded: ClassReg
- scratchReg: TempReg.
- objectRepresentation
  genLoadSlot: ValueIndex
  sourceReg: ClassReg
  destReg: TempReg.
  self PushR: TempReg.
  ^0!

Item was changed:
  ----- Method: SimpleStackBasedCogit>>genStorePop:LiteralVariable: (in category 'bytecode generator support') -----
  genStorePop: popBoolean LiteralVariable: litVarIndex
  <inline: false>
  | association |
  "The only reason we assert needsFrame here is that in a frameless method
  ReceiverResultReg must and does contain only self, but the ceStoreCheck
  trampoline expects the target of the store to be in ReceiverResultReg.  So
  in a frameless method we would have a conflict between the receiver and
  the literal store, unless we were smart enough to realise that ReceiverResultReg
  was unused after the literal variable store, unlikely given that methods
  return self by default."
  self assert: needsFrame.
  association := self getLiteral: litVarIndex.
  self genMoveConstant: association R: ReceiverResultReg.
- objectRepresentation
- genEnsureObjInRegNotForwarded: ReceiverResultReg
- scratchReg: TempReg.
  popBoolean
  ifTrue: [self PopR: ClassReg]
  ifFalse: [self MoveMw: 0 r: SPReg R: ClassReg].
  self
  genStoreSourceReg: ClassReg
  slotIndex: ValueIndex
  destReg: ReceiverResultReg
  scratchReg: TempReg
  inFrame: needsFrame.
  ^0!

Item was changed:
  ----- Method: SistaCogit>>genForwardersInlinedIdenticalOrNotIf: (in category 'bytecode generators') -----
  genForwardersInlinedIdenticalOrNotIf: orNot
  "Override to count inlined branches if followed by a conditional branch.
  We borrow the following conditional branch's counter and when about to
  inline the comparison we decrement the counter (without writing it back)
  and if it trips simply abort the inlining, falling back to the normal send which
  will then continue to the conditional branch which will trip and enter the abort."
  | nextPC postBranchPC targetBytecodePC branchDescriptor counterReg fixup jumpEqual jumpNotEqual
   counterAddress countTripped unforwardArg unforwardRcvr argReg rcvrReg regMask |
  <var: #fixup type: #'BytecodeFixup *'>
  <var: #countTripped type: #'AbstractInstruction *'>
  <var: #label type: #'AbstractInstruction *'>
  <var: #branchDescriptor type: #'BytecodeDescriptor *'>
  <var: #jumpEqual type: #'AbstractInstruction *'>
  <var: #jumpNotEqual type: #'AbstractInstruction *'>
 
  ((coInterpreter isOptimizedMethod: methodObj) or: [needsFrame not]) ifTrue:
  [^super genForwardersInlinedIdenticalOrNotIf: orNot].
 
  regMask := 0.
 
  self extractMaybeBranchDescriptorInto: [ :descr :next :postBranch :target |
  branchDescriptor := descr. nextPC := next. postBranchPC := postBranch. targetBytecodePC := target ].
 
+ unforwardRcvr := (self ssValue: 1) type ~= SSConstant.
+ unforwardArg := self ssTop type ~= SSConstant.
- unforwardRcvr := (objectRepresentation isUnannotatableConstant: (self ssValue: 1)) not.
- unforwardArg := (objectRepresentation isUnannotatableConstant: self ssTop) not.
 
  "If an operand is an annotatable constant, it may be forwarded, so we need to store it into a
  register so the forwarder check can jump back to the comparison after unforwarding the constant.
  However, if one of the operands is an unannotatable constant, we do not allocate a register for it
  (machine code will use operations on constants)."
  self
  allocateEqualsEqualsRegistersArgNeedsReg: unforwardArg
  rcvrNeedsReg: unforwardRcvr
  into: [ :rcvr :arg | rcvrReg:= rcvr. argReg := arg ].
 
  "Only interested in inlining if followed by a conditional branch."
  (branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:
  [^ self
  genIdenticalNoBranchArgIsConstant: unforwardArg not
  rcvrIsConstant: unforwardRcvr not
  argReg: argReg
  rcvrReg: rcvrReg
  orNotIf: orNot].
 
  "If branching the stack must be flushed for the merge"
  self ssFlushTo: simStackPtr - 2.
 
  unforwardArg ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg ].
  unforwardRcvr ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg ].
 
  regMask := argReg = NoReg
  ifTrue: [self registerMaskFor: rcvrReg]
  ifFalse:
  [rcvrReg = NoReg
  ifTrue: [self registerMaskFor: argReg]
  ifFalse: [self registerMaskFor: rcvrReg and: argReg]].
  counterReg := self allocateRegNotConflictingWith: regMask.
  self
  genExecutionCountLogicInto: [ :cAddress :countTripBranch |
  counterAddress := cAddress.
  countTripped := countTripBranch ]
  counterReg: counterReg.
 
  self assert: (unforwardArg or: [ unforwardRcvr ]).
  self genCmpArgIsConstant: unforwardArg not rcvrIsConstant: unforwardRcvr not argReg: argReg rcvrReg: rcvrReg.
  self ssPop: 2.
 
  orNot == branchDescriptor isBranchTrue "orNot is true for ~~"
  ifFalse:
  [ fixup := (self ensureNonMergeFixupAt: postBranchPC) asUnsignedInteger.
  self JumpZero:  (self ensureNonMergeFixupAt: targetBytecodePC) asUnsignedInteger ]
  ifTrue:
  [ fixup := (self ensureNonMergeFixupAt: targetBytecodePC) asUnsignedInteger.
  self JumpZero: (self ensureNonMergeFixupAt: postBranchPC) asUnsignedInteger ].
 
  self genFallsThroughCountLogicCounterReg: counterReg counterAddress: counterAddress.
  self Jump: fixup.
 
  countTripped jmpTarget: self Label.
 
  "inlined version of #== ignoring the branchDescriptor if the counter trips to have normal state for the optimizer"
  self ssPop: -2.
  self genCmpArgIsConstant: unforwardArg not rcvrIsConstant: unforwardRcvr not argReg: argReg rcvrReg: rcvrReg.
  self ssPop: 2.
 
  "This code necessarily directly falls through the jumpIf: code which pops the top of the stack into TempReg.
  We therefore directly assign the result to TempReg to save one move instruction"
  jumpEqual := orNot ifFalse: [self JumpZero: 0] ifTrue: [self JumpNonZero: 0].
  self genMoveFalseR: TempReg.
  jumpNotEqual := self Jump: 0.
  jumpEqual jmpTarget: (self genMoveTrueR: TempReg).
  jumpNotEqual jmpTarget: self Label.
  self ssPushRegister: TempReg.
 
  (self fixupAt: nextPC) notAFixup ifTrue: [ branchReachedOnlyForCounterTrip := true ].
 
  ^ 0!

Item was changed:
  ----- Method: SistaCogit>>genUnaryUnforwardNonImmediateInlinePrimitive (in category 'inline primitive unary generators') -----
  genUnaryUnforwardNonImmediateInlinePrimitive
  "1039 unforwardNonImmediate
  non immediate => Not a forwarder"
+ "unforwardNonImmediate was used exclusively for literal variables,
+ which are now never forwarders because we scan the zone in postBecome,
+ so this is effectively a noop."
  | topReg |
  topReg := self allocateRegForStackEntryAt: 0 notConflictingWith: 0.
  self ssTop popToReg: topReg.
+ "objectRepresentation genEnsureObjInRegNotForwarded: topReg scratchReg: TempReg."
- objectRepresentation genEnsureObjInRegNotForwarded: topReg scratchReg: TempReg.
  self ssPop: 1.
  ^self ssPushRegister: topReg!

Item was changed:
  ----- Method: SistaCogitClone>>genUnaryUnforwardNonImmediateInlinePrimitive (in category 'inline primitive unary generators') -----
  genUnaryUnforwardNonImmediateInlinePrimitive
  "1039 unforwardNonImmediate
  non immediate => Not a forwarder"
+ "unforwardNonImmediate was used exclusively for literal variables,
+ which are now never forwarders because we scan the zone in postBecome,
+ so this is effectively a noop."
  | topReg |
  topReg := self allocateRegForStackEntryAt: 0 notConflictingWith: 0.
  self ssTop popToReg: topReg.
+ "objectRepresentation genEnsureObjInRegNotForwarded: topReg scratchReg: TempReg."
- objectRepresentation genEnsureObjInRegNotForwarded: topReg scratchReg: TempReg.
  self ssPop: 1.
  ^self ssPushRegister: topReg!
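The commented-out forwarder check above is safe to drop because, per this commit, the VM now scans the machine-code zone after a become: and follows any forwarded literals in methods flagged cmHasMovableLiteral. A minimal Python model of that post-become scan (the class and function names here are illustrative stand-ins, not the VM's actual C identifiers):

```python
# Hypothetical model of the post-become zone scan that makes the inline
# forwarder check above unnecessary.  Obj, CogMethod and follow_forwarder
# are simplified stand-ins for the VM's structures.

class Obj:
    def __init__(self, is_young=False):
        self.is_young = is_young
        self.forwarded_to = None  # set by become:

def follow_forwarder(obj):
    # Chase forwarding pointers left behind by become:
    while obj.forwarded_to is not None:
        obj = obj.forwarded_to
    return obj

class CogMethod:
    def __init__(self, literals, has_movable_literal):
        self.literals = literals
        self.has_movable_literal = has_movable_literal
        self.in_young_referrers = False

def scan_zone_post_become(methods):
    # Only methods flagged as referencing movable (becommable) literals
    # need scanning; unflagged methods cannot contain forwarded literals.
    for m in methods:
        if not m.has_movable_literal:
            continue
        for i, lit in enumerate(m.literals):
            followed = follow_forwarder(lit)
            if followed is not lit:
                m.literals[i] = followed
                if followed.is_young:
                    m.in_young_referrers = True
```

Only flagged methods are visited, and only those that end up with a new young literal need adding to the young referrers set, which is what makes this cheaper than the old per-access read barrier in JITted code.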

Item was changed:
  ----- Method: SistaRegisterAllocatingCogit>>genForwardersInlinedIdenticalOrNotIf: (in category 'bytecode generators') -----
  genForwardersInlinedIdenticalOrNotIf: orNot
  "Override to count inlined branches if followed by a conditional branch.
  We borrow the following conditional branch's counter and when about to
  inline the comparison we decrement the counter (without writing it back)
  and if it trips simply abort the inlining, falling back to the normal send which
  will then continue to the conditional branch which will trip and enter the abort."
  | nextPC postBranchPC targetBytecodePC branchDescriptor counterReg fixup jumpEqual jumpNotEqual
   counterAddress countTripped unforwardArg unforwardRcvr argReg rcvrReg regMask |
  <var: #fixup type: #'BytecodeFixup *'>
  <var: #countTripped type: #'AbstractInstruction *'>
  <var: #label type: #'AbstractInstruction *'>
  <var: #branchDescriptor type: #'BytecodeDescriptor *'>
  <var: #jumpEqual type: #'AbstractInstruction *'>
  <var: #jumpNotEqual type: #'AbstractInstruction *'>
 
  ((coInterpreter isOptimizedMethod: methodObj) or: [needsFrame not]) ifTrue:
  [^super genForwardersInlinedIdenticalOrNotIf: orNot].
 
  regMask := 0.
 
  self extractMaybeBranchDescriptorInto: [ :descr :next :postBranch :target |
  branchDescriptor := descr. nextPC := next. postBranchPC := postBranch. targetBytecodePC := target ].
 
+ unforwardRcvr := (self ssValue: 1) type ~= SSConstant.
+ unforwardArg := self ssTop type ~= SSConstant.
- unforwardRcvr := (objectRepresentation isUnannotatableConstant: (self ssValue: 1)) not.
- unforwardArg := (objectRepresentation isUnannotatableConstant: self ssTop) not.
 
  "If an operand is an annotable constant, it may be forwarded, so we need to store it into a
  register so the forwarder check can jump back to the comparison after unforwarding the constant.
  However, if one of the operand is an unnanotable constant, does not allocate a register for it
  (machine code will use operations on constants)."
  rcvrReg:= argReg := NoReg.
  self
  allocateEqualsEqualsRegistersArgNeedsReg: unforwardArg
  rcvrNeedsReg: unforwardRcvr
  into: [ :rcvr :arg | rcvrReg:= rcvr. argReg := arg ].
 
  argReg ~= NoReg ifTrue: [ regMask := self registerMaskFor: argReg ].
  rcvrReg ~= NoReg ifTrue: [ regMask := regMask bitOr: (self registerMaskFor: rcvrReg) ].
 
  "Only interested in inlining if followed by a conditional branch."
  (branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:
  [^ self
  genIdenticalNoBranchArgIsConstant: unforwardArg not
  rcvrIsConstant: unforwardRcvr not
  argReg: argReg
  rcvrReg: rcvrReg
  orNotIf: orNot].
 
  unforwardArg ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg ].
  unforwardRcvr ifTrue: [ objectRepresentation genEnsureOopInRegNotForwarded: rcvrReg scratchReg: TempReg ].
 
  counterReg := self allocateRegNotConflictingWith: regMask.
  self
  genExecutionCountLogicInto: [ :cAddress :countTripBranch |
  counterAddress := cAddress.
  countTripped := countTripBranch ]
  counterReg: counterReg.
 
  self assert: (unforwardArg or: [ unforwardRcvr ]).
  self genCmpArgIsConstant: unforwardArg not rcvrIsConstant: unforwardRcvr not argReg: argReg rcvrReg: rcvrReg.
  self ssPop: 2.
 
  orNot == branchDescriptor isBranchTrue "orNot is true for ~~"
  ifFalse:
  [ fixup := (self ensureNonMergeFixupAt: postBranchPC) asUnsignedInteger.
  self JumpZero:  (self ensureNonMergeFixupAt: targetBytecodePC) asUnsignedInteger ]
  ifTrue:
  [ fixup := (self ensureNonMergeFixupAt: targetBytecodePC) asUnsignedInteger.
  self JumpZero: (self ensureNonMergeFixupAt: postBranchPC) asUnsignedInteger ].
 
  self genFallsThroughCountLogicCounterReg: counterReg counterAddress: counterAddress.
  self Jump: fixup.
 
  countTripped jmpTarget: self Label.
 
  "inlined version of #== ignoring the branchDescriptor if the counter trips to have normal state for the optimizer"
  self ssPop: -2.
  self genCmpArgIsConstant: unforwardArg not rcvrIsConstant: unforwardRcvr not argReg: argReg rcvrReg: rcvrReg.
  self ssPop: 2.
 
  "This code necessarily directly falls through the jumpIf: code which pops the top of the stack into TempReg.
  We therefore directly assign the result to TempReg to save one move instruction"
  jumpEqual := orNot ifFalse: [self JumpZero: 0] ifTrue: [self JumpNonZero: 0].
  self genMoveFalseR: TempReg.
  jumpNotEqual := self Jump: 0.
  jumpEqual jmpTarget: (self genMoveTrueR: TempReg).
  jumpNotEqual jmpTarget: self Label.
  self ssPushRegister: TempReg.
 
  (self fixupAt: nextPC) notAFixup ifTrue: [ branchReachedOnlyForCounterTrip := true ].
 
  ^ 0!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genForwardersInlinedIdenticalOrNotIf: (in category 'bytecode generators') -----
  genForwardersInlinedIdenticalOrNotIf: orNot
  | nextPC branchDescriptor unforwardRcvr argReg targetBytecodePC
   unforwardArg  rcvrReg postBranchPC label fixup |
  <var: #branchDescriptor type: #'BytecodeDescriptor *'>
  <var: #label type: #'AbstractInstruction *'>
 
  self extractMaybeBranchDescriptorInto: [ :descr :next :postBranch :target |
  branchDescriptor := descr. nextPC := next. postBranchPC := postBranch. targetBytecodePC := target ].
 
  "If an operand is an annotable constant, it may be forwarded, so we need to store it into a
  register so the forwarder check can jump back to the comparison after unforwarding the constant.
  However, if one of the operand is an unnanotable constant, does not allocate a register for it
  (machine code will use operations on constants) and does not generate forwarder checks."
+ unforwardRcvr := (self ssValue: 1) type ~= SSConstant.
+ unforwardArg := self ssTop type ~= SSConstant.
- unforwardRcvr := (objectRepresentation isUnannotatableConstant: (self ssValue: 1)) not.
- unforwardArg := (objectRepresentation isUnannotatableConstant: self ssTop) not.
 
  self
  allocateEqualsEqualsRegistersArgNeedsReg: unforwardArg
  rcvrNeedsReg: unforwardRcvr
  into: [ :rcvr :arg | rcvrReg:= rcvr. argReg := arg ].
 
  "If not followed by a branch, resolve to true or false."
  (branchDescriptor isBranchTrue or: [branchDescriptor isBranchFalse]) ifFalse:
  [^ self
  genIdenticalNoBranchArgIsConstant: unforwardArg not
  rcvrIsConstant: unforwardRcvr not
  argReg: argReg
  rcvrReg: rcvrReg
  orNotIf: orNot].
 
  "If branching the stack must be flushed for the merge"
  self ssFlushTo: simStackPtr - 2.
 
  label := self Label.
  self genCmpArgIsConstant: unforwardArg not rcvrIsConstant: unforwardRcvr not argReg: argReg rcvrReg: rcvrReg.
  self ssPop: 2.
 
  "Since there is a following conditional jump bytecode (unless there is deadCode),
  define non-merge fixups and leave the cond bytecode to set the mergeness."
  (self fixupAt: nextPC) notAFixup
  ifTrue: "The next instruction is dead.  we can skip it."
  [deadCode := true.
  self ensureFixupAt: targetBytecodePC.
  self ensureFixupAt: postBranchPC]
  ifFalse:
  [self deny: deadCode]. "push dummy value below"
 
  self assert: (unforwardArg or: [unforwardRcvr]).
  orNot == branchDescriptor isBranchTrue "orNot is true for ~~"
  ifFalse: "a == b ifTrue: ... or a ~~ b ifFalse: ... jump on equal to target pc"
  [fixup := self ensureNonMergeFixupAt: postBranchPC.
  self JumpZero:  (self ensureNonMergeFixupAt: targetBytecodePC)]
  ifTrue: "a == b ifFalse: ... or a ~~ b ifTrue: ... jump on equal to post-branch pc"
  [fixup := self ensureNonMergeFixupAt: targetBytecodePC.
  self JumpZero: (self ensureNonMergeFixupAt: postBranchPC)].
 
  "The forwarders checks need to jump back to the comparison (label) if a forwarder is found, else
  jump forward either to the next forwarder check or to the postBranch or branch target (fixup)."
  (unforwardArg and: [unforwardRcvr]) ifTrue:
  [objectRepresentation genEnsureOopInRegNotForwarded: argReg scratchReg: TempReg jumpBackTo: label].
  objectRepresentation
  genEnsureOopInRegNotForwarded: (unforwardRcvr ifTrue: [rcvrReg] ifFalse: [argReg])
  scratchReg: TempReg
  ifForwarder: label
  ifNotForwarder: fixup.
 
  "Not reached, execution flow has jumped to fixup"
  deadCode ifFalse:
  [self ssPushConstant: objectMemory trueObject]. "dummy value"
  ^0!
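The jump-back-to-label structure the comments above describe can be modelled as a retry loop: compare, and if either operand turns out to be a forwarder, unforward it and redo the comparison. A hypothetical Python rendering (`follow` stands in for the generated unforwarding code; this is a control-flow sketch, not the emitted machine code):

```python
# Model of the emitted control flow: 'label' is the comparison; each
# forwarder check either loops back to it or exits with the result.
def identical_with_forwarder_checks(rcvr, arg, follow):
    while True:
        if rcvr == arg:                  # the comparison at 'label'
            return True                  # jump to the branch-taken fixup
        r2, a2 = follow(rcvr), follow(arg)
        if (r2, a2) == (rcvr, arg):
            return False                 # neither was a forwarder; result final
        rcvr, arg = r2, a2               # a forwarder was found: back to 'label'
```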

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genInlinedIdenticalOrNotIf: (in category 'bytecode generators') -----
  genInlinedIdenticalOrNotIf: orNot
  "Decompose code generation for #== into a common constant-folding version,
  followed by a double dispatch through the objectRepresentation to a version
  that doesn't deal with forwarders and a version that does."
  | primDescriptor result |
  <var: #primDescriptor type: #'BytecodeDescriptor *'>
  primDescriptor := self generatorAt: byte0.
 
+ ((self ssTop type == SSConstant)
+ and: [(self ssValue: 1) type == SSConstant]) ifTrue:
- ((objectRepresentation isUnannotatableConstant: self ssTop)
- and: [ objectRepresentation isUnannotatableConstant: (self ssValue: 1) ]) ifTrue:
  [self assert: primDescriptor isMapped not.
  result := (orNot
  ifFalse: [self ssTop constant = (self ssValue: 1) constant]
  ifTrue: [self ssTop constant ~= (self ssValue: 1) constant])
  ifTrue: [objectMemory trueObject]
  ifFalse: [objectMemory falseObject].
  self ssPop: 2.
  ^self ssPushConstant: result].
 
  ^objectRepresentation genInlinedIdenticalOrNotIfGuts: orNot!
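The constant-folding arm above can be paraphrased as: when both simulated stack entries are compile-time constants, #== (or #~~ when orNot) is resolved at JIT time and only the boolean result is pushed. A simplified sketch, with a Python list standing in for the simulated stack and `==` approximating oop identity:

```python
# Sketch of the constant-folding path in genInlinedIdenticalOrNotIf:.
# TRUE_OBJ / FALSE_OBJ stand in for objectMemory trueObject/falseObject.
TRUE_OBJ, FALSE_OBJ = object(), object()

def fold_identical(stack, or_not):
    """Pop two constants; push the folded result of #== (or #~~ if or_not)."""
    arg, rcvr = stack.pop(), stack.pop()
    same = rcvr == arg            # identity of the constants' oops
    result = (same != or_not)     # #~~ inverts the comparison
    stack.append(TRUE_OBJ if result else FALSE_OBJ)
    return stack
```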

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genLoadLiteralVariable:in: (in category 'bytecode generator support') -----
  genLoadLiteralVariable: litVarIndex in: destReg
  <inline: true>
  | association |
  association := self getLiteral: litVarIndex.
  destReg = ReceiverResultReg ifTrue: [self voidReceiverResultRegContainsSelf].
  self ssAllocateRequiredReg: destReg.
+ self genMoveConstant: association R: destReg.!
- self genMoveConstant: association R: destReg.
- objectRepresentation genEnsureObjInRegNotForwarded: destReg scratchReg: TempReg.!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genNSSend:numArgs:depth:sendTable: (in category 'bytecode generators') -----
  genNSSend: selectorIndex numArgs: numArgs depth: depth sendTable: sendTable
  <var: #sendTable type: #'sqInt *'>
  | selector nsSendCache |
  self assert: (selectorIndex between: 0 and: (objectMemory literalCountOf: methodObj) - 1).
  selector := self getLiteral: selectorIndex.
  self assert: (objectMemory addressCouldBeOop: selector).
+ (objectMemory isNonImmediate: selector) ifTrue:
+ [self setHasMovableLiteral: true].
  (objectMemory isYoung: selector) ifTrue:
  [self setHasYoungReferent: true].
 
  nsSendCache := theIRCs + (NumOopsPerNSC * objectMemory bytesPerOop * indexOfIRC).
  indexOfIRC := indexOfIRC + 1.
  self assert: (objectMemory isInOldSpace: nsSendCache).
  self initializeNSSendCache: nsSendCache selector: selector numArgs: numArgs depth: depth.
 
  self ssAllocateCallReg: SendNumArgsReg.
 
  "This may leave the method receiver on the stack, which might not be the implicit receiver.
  But the lookup trampoline will establish an on-stack receiver once it locates it."
  self marshallAbsentReceiverSendArguments: numArgs.
 
  "Load the cache last so it is a fixed distance from the call."
  self MoveUniqueCw: nsSendCache R: SendNumArgsReg.
  self CallNewspeakSend: (sendTable at: (numArgs min: NumSendTrampolines - 1)).
 
  self voidReceiverOptStatus.
  self ssPushRegister: ReceiverResultReg.
  ^0!

Item was changed:
  ----- Method: StackToRegisterMappingCogit>>genPushLiteralVariable: (in category 'bytecode generator support') -----
  genPushLiteralVariable: literalIndex
  <inline: false>
  | association freeReg |
  association := self getLiteral: literalIndex.
  "If followed by a directed super send bytecode, avoid generating any code yet.
  The association will be passed to the directed send trampoline in a register
  and fully dereferenced only when first linked.  It will be ignored in later sends."
  BytecodeSetHasDirectedSuperSend ifTrue:
  [self deny: directedSendUsesBinding.
  self nextDescriptorExtensionsAndNextPCInto:
  [:descriptor :exta :extb :followingPC|
  (self isDirectedSuper: descriptor extA: exta extB: extb) ifTrue:
  [self ssPushConstant: association.
  directedSendUsesBinding := true.
  ^0]]].
  freeReg := self allocateRegNotConflictingWith: 0.
  "N.B. Do _not_ use ReceiverResultReg to avoid overwriting receiver in assignment in frameless methods."
  "So far descriptors are not rich enough to describe the entire dereference so generate the register
  load but don't push the result.  There is an order-of-evaluation issue if we defer the dereference."
  self genMoveConstant: association R: TempReg.
  objectRepresentation
- genEnsureObjInRegNotForwarded: TempReg
- scratchReg: freeReg.
- objectRepresentation
  genLoadSlot: ValueIndex
  sourceReg: TempReg
  destReg: freeReg.
  self ssPushRegister: freeReg.
  ^0!

Item was changed:
  ----- Method: VMClass class>>initializeWithOptions: (in category 'initialization') -----
  initializeWithOptions: optionsDictionaryOrArray
  "Initialize the receiver, typically initializing class variables. Initialize any class variables
  whose names occur in optionsDictionary with the corresponding values therein."
  InitializationOptions := optionsDictionaryOrArray isArray
  ifTrue: [Dictionary newFromPairs: optionsDictionaryOrArray]
  ifFalse: [optionsDictionaryOrArray].
 
+ ExpensiveAsserts := InitializationOptions at: #ExpensiveAsserts ifAbsent: [false].
+ self optionClassNames do:
+ [:optionClassName|
+ InitializationOptions at: optionClassName ifAbsentPut: false]!
- ExpensiveAsserts := InitializationOptions at: #ExpensiveAsserts ifAbsent: [false]!

Item was added:
+ ----- Method: VMClass class>>optionClassNames (in category 'initialization') -----
+ optionClassNames
+ "Answer the names of all classes that appear in opption : pragmas.
+ These have to be set to false befpre seelectively being enabled for
+ the option: pragma to be correctly processed in shouldIncludeMethodForSelector:"
+ | optionClassNames block |
+ optionClassNames := Set new.
+ block :=
+ [:c|
+ c methodsDo:
+ [:m|
+ (m pragmaAt: #option:) ifNotNil:
+ [:p|
+ (p arguments first isSymbol
+  and: [(Smalltalk classNamed: p arguments first) notNil]) ifTrue:
+ [optionClassNames add: p arguments first]]]].
+ InterpreterPrimitives withAllSubclasses do: block.
+ CogObjectRepresentation withAllSubclasses do: block.
+ ^optionClassNames!
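The Slang initialization fix in the two items above amounts to: every class named in an option: pragma must be present in InitializationOptions, defaulting to false, before initialization runs; otherwise option checks misfire for classes that were never explicitly configured. A dictionary-based sketch (the function and key names are simplified stand-ins for the VM generator's actual structures):

```python
# Sketch of the fixed initializeWithOptions: behaviour — seed every
# option class as excluded (False) unless the caller enabled it.
def initialize_with_options(options, option_class_names):
    init = dict(options)
    init.setdefault('ExpensiveAsserts', False)
    for name in option_class_names:
        init.setdefault(name, False)  # the fix: never leave an option class absent
    return init
```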

Item was changed:
  ----- Method: VMStructType class>>checkGenerateSurrogate:bytesPerWord: (in category 'code generation') -----
  checkGenerateSurrogate: surrogateClass bytesPerWord: bytesPerWord
  "Check the accessor methods for the fields of the receiver and if necessary install new
  or updated versions in the surrogate class along with the alignedByteSize class method."
 
  "CogBlockMethod checkGenerateSurrogate: CogBlockMethodSurrogate32 bytesPerWord: 4.
  CogMethod checkGenerateSurrogate: CogMethodSurrogate32 bytesPerWord: 4.
  CogBlockMethod checkGenerateSurrogate: CogBlockMethodSurrogate64 bytesPerWord: 8.
  CogMethod checkGenerateSurrogate: CogMethodSurrogate64 bytesPerWord: 8"
  | accessors oldBytesPerWord |
  oldBytesPerWord := BytesPerWord.
  accessors := [self fieldAccessorSourceFor: surrogateClass bytesPerWord: (BytesPerWord := bytesPerWord)]
  ensure: [BytesPerWord := oldBytesPerWord].
  accessors keysAndValuesDo:
  [:mr :source|
+ source ~= mr sourceStringOrNil ifTrue: "sourceStringOrNil answers nil for a not-yet-existing method"
- source ~= mr sourceString ifTrue:
  [mr actualClass compile: source classified: #accessing]]
 
  "Dictionary withAll: ((self fieldAccessorSourceFor: surrogateClass bytesPerWord: bytesPerWord) associationsSelect:
  [:a| a value ~= a key sourceString])"!