2021-01-11 Meeting notes
Meeting Details
Topic: linaro-open-discussion meeting
Time: Jan 11, 2021, 10:00 AM London
Join Zoom Meeting
https://linaro-org.zoom.us/j/4417312160
Meeting ID: 441 731 2160
One tap mobile
+16465588656,,4417312160# US (New York)
+16699009128,,4417312160# US (San Jose)
Dial by your location
+1 646 558 8656 US (New York)
+1 669 900 9128 US (San Jose)
+1 253 215 8782 US (Tacoma)
+1 301 715 8592 US (Washington D.C)
+1 312 626 6799 US (Chicago)
+1 346 248 7799 US (Houston)
877 853 5247 US Toll-free
888 788 0099 US Toll-free
Find your local number: https://linaro-org.zoom.us/u/aUcYpPkSC
Attendees
Mike
Tim Chen ( Intel )
Zhangfei Gao (Hisilicon LT)
Shameerali KT
Jammy
Morten Rasmussen (Arm)
Lorenzo (Arm)
Dietmar (Arm)
Valentin (Arm)
James Morse (Arm)
Jonathan Cameron (Huawei)
Vincent
Barry Song (Huawei)
Chris Redpath (arm)
Sudeep Holla (arm)
Ulf
Bill F
Loic Pallardy (ST)
Guodong Xu (Hisilicon LT)
Salil Mehta (Huawei)
Matthias Brugger (SUSE)
ZengTao (?)
wanghuiqiang (Huawei)
Agenda
Confirmed topics
Lorenzo Pieralisi : Arm : Scheduler topology: cluster scheduler awareness
Lorenzo Pieralisi : Arm : ACPI CPU hotplug
AOB
Jonathan
Meeting notes
Lorenzo Pieralisi : Arm : Scheduler topology: cluster scheduler awareness
Song Bao Hua
talking to slides
Better performance within a single cluster vs across multiple clusters led to the investigation; used Jonathan's patch to test the Linux scheduler.
hackbench performs better with the select_idle_sibling change; it keeps wakeups within a cluster
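For context, hackbench is a messaging microbenchmark dominated by task wakeups, which is why wakeup placement dominates its results. Below is a rough, illustrative sketch of the pattern it exercises; this is not the real benchmark, and the loop/size values are made up, not the configuration behind the numbers discussed here.

```python
import socket
import threading
import time

def run_pair(loops=1000, size=100):
    """Illustrative hackbench-style pair: one sender, one receiver
    exchanging small messages over a socketpair. Performance of this
    pattern is dominated by wakeup placement, which is what the
    cluster-aware select_idle_sibling change targets."""
    a, b = socket.socketpair()
    payload = b"x" * size

    def sender():
        for _ in range(loops):
            a.sendall(payload)
        a.close()

    def receiver():
        got = 0
        while got < loops * size:
            data = b.recv(65536)
            if not data:
                break
            got += len(data)
        b.close()

    t = threading.Thread(target=sender)
    start = time.perf_counter()
    t.start()
    receiver()
    t.join()
    return time.perf_counter() - start

print(f"pair time: {run_pair():.4f}s")
```

The real benchmark runs many such pairs in groups, so the scheduler constantly chooses which CPU wakes each receiver.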
Vincent, Morten: there are multiple topics
Morten
how to get information
scheduling policy - this first
Need a wider discussion. Made some comments on Friday
Vincent
Scheduling policy: adding a scheduling level; I don't see any pushback, and there is a performance improvement
Sometimes you want to spread tasks and other times pack within a cluster. With hackbench there are 40 threads and few idle states; it is about spreading tasks. The patches should fit on top of Peter's wakeup patches; NUMA scheduling will be the most challenging.
Dietmar: will this scan and fall back to the standard LLC? (v3) VG: can't loop, but in the new approach it is OK, as you visit each CPU only once.
Morten: when is it good to pack tasks vs spread them? What is the measure for deciding which approach to take? Example: two tasks with a lot of shared data will be a problem.
Vincent: need to look at whether the previous CPU is idle. Migration may move tasks from one cluster to another; the task needs to complete before load balancing takes effect
JC: if this is spread across NUMA nodes we have many more than 40 CPUs
VG: There will not be any on the longer one.
DE: we don't have an example where we choose a policy based on the workload
Lorenzo: we have spent 35 minutes and covered topic one; let's move discussion to the list where we can
Tim Chen
x86 will also need this concept
could have tasks that share memory without a wakeup relationship
SBH - userspace may need to provide information
JC: we can gather the info even if it is not accurate initially, we perhaps need to gather data separately
x86 does not use PPTT, but there are extra levels of topology information
TC: we are using L2
JC: we don't have a way to represent the L3 tag cache
Die is not available to userspace
Any concerns on extra complexity?
LP: the userspace interface should be architectural. JC: sort of agree, with generic naming. MR: why don't you just dump the PPTT? JC: the kernel is not always enabled to get a dump. MC: we could patch this.
SBH will send a V4 at this time
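As a reference point for the userspace-visibility discussion, here is a rough sketch of reading the topology the kernel already exports through sysfs on Linux. A cluster_cpus entry would only appear with patches like those under discussion, and on current kernels the directory holds entries such as core_siblings and package_cpus; the raw PPTT can separately be dumped with acpidump where acpica-tools is installed.

```python
import os

def read_topology(cpu=0):
    """Read whatever per-CPU topology masks the kernel exposes via sysfs.
    Returns an empty dict on non-Linux systems or when sysfs is absent."""
    topo = f"/sys/devices/system/cpu/cpu{cpu}/topology"
    result = {}
    if os.path.isdir(topo):
        for name in sorted(os.listdir(topo)):
            try:
                with open(os.path.join(topo, name)) as f:
                    result[name] = f.read().strip()
            except OSError:
                pass  # some entries may not be readable
    return result

print(read_topology())
```

This is the interface the "generic naming" comment refers to: userspace consumers read these files rather than parsing ACPI tables directly.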
LP - AOB
Lorenzo Pieralisi : Arm : ACPI CPU hotplug
Progress with discussions: two PSCI solutions, QEMU changes, or use ACPI as in the x86 world
Working inside Arm as things firm up
For hotplug in a VM, suspect it relies on userspace; it is actually aligned in userspace. Salil Mehta: that is correct.
SM: at the QEMU level you need an event. LP: in the guest, there is something to be done. SM: yes, we are using sysfs (Justin He was using a VSOCK event); we are doing it manually for testing
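A minimal sketch of the manual, guest-side sysfs flow mentioned above. Writing the per-CPU online file needs root, so this sketch only reads state and shows the write as a comment; the QEMU-side event plumbing is out of scope here.

```python
def cpu_online_path(cpu):
    # Per-CPU hotplug control file on Linux (needs CONFIG_HOTPLUG_CPU)
    return f"/sys/devices/system/cpu/cpu{cpu}/online"

def is_online(cpu):
    """True/False if the CPU's online state is readable; None when the
    file is absent (cpu0 often has no 'online' file, non-Linux has none)."""
    try:
        with open(cpu_online_path(cpu)) as f:
            return f.read().strip() == "1"
    except OSError:
        return None

# Offlining a vCPU by hand (as root) would be:
#   with open(cpu_online_path(1), "w") as f: f.write("0")
print(is_online(1))
```

This is the "doing it manually for testing" path: the guest reacts to the hotplug event by toggling these files, however the event is delivered.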
RMR: we sorted it out; need to get it done, specs will follow
JC: don't re-enumerate PCI; discussion with Jean-Philippe