Discussion:
[PATCH v2 0/3] powerpc/mm/hash: Time improvements for memory hot(un)plug
Leonardo Bras
2021-04-30 14:36:04 UTC
This patchset intends to reduce time needed for processing memory
hotplug/hotunplug in hash guests.

The first patch makes sure guests with a page size larger than 4k don't need
to go through HPT resize-downs after memory hotplug.

The second and third patches make hotplug / hotunplug perform a single
HPT resize per operation, instead of one for each shift change, or one
for each LMB in case of a resize-down error.

Why wasn't the same mechanism used for both memory hotplug and hotunplug?
Because they have different requirements:

Memory hotplug usually causes HPT resize-ups, which are fine to do at the
start of hotplug, but resize-ups should never be disabled, as other
mechanisms may try to increase memory and hit issues with an HPT that is
too small.

Memory hotunplug causes HPT resize-downs, which can be disabled (the HPT
will just remain larger for a while), but they need to happen at the end of
a hotunplug operation. So, to batch them, we need to disable resize-downs
and perform a single one at the end.
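
As a rough sketch of how patches 2 and 3 use this (simplified: the real
callers are the dlpar_memory_* routines in hotplug-memory.c, and
new_mem_size stands for the expected memory size after the operation):

	/* Hot-add: a single up-front resize-up to the expected final size. */
	if (!radix_enabled())
		hash_batch_expand_prepare(new_mem_size);

	/* ... add each LMB; no further resize-up is needed ... */

	/* Hot-unplug: suppress resize-downs, do a single one at the end. */
	if (!radix_enabled())
		hash_batch_shrink_begin();

	/* ... remove each LMB; per-LMB resize-downs are skipped ... */

	if (!radix_enabled())
		hash_batch_shrink_end();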

Tests done with this patchset in the same machine / guest config:
Starting memory: 129GB, DIMM: 256GB
Before patchset: hotplug = 710s, hotunplug = 621s.
After patchset: hotplug = 21s, hotunplug = 100s.

Any feedback will be appreciated!

Changes since v1:
- Atomic used to disable resize was replaced by a mutex
- Removed wrappers, testing for !radix directly in hot(un)plug routine
- Added bounds to HPT resize loop
- Removed batching from dlpar_memory_*_by_index, as it adds a single LMB

Best regards,
Leonardo Bras (3):
powerpc/mm/hash: Avoid resizing-down HPT on first memory hotplug
powerpc/mm/hash: Avoid multiple HPT resize-ups on memory hotplug
powerpc/mm/hash: Avoid multiple HPT resize-downs on memory hotunplug

arch/powerpc/include/asm/book3s/64/hash.h | 4 +
arch/powerpc/mm/book3s64/hash_utils.c | 95 ++++++++++++++++---
.../platforms/pseries/hotplug-memory.c | 35 +++++++
3 files changed, 119 insertions(+), 15 deletions(-)
--
2.30.2
Leonardo Bras
2021-04-30 14:36:06 UTC
Because hypervisors may need to create HPTs without knowing the guest
page size, the smallest supported page size (4k) may be chosen, resulting
in an HPT that is possibly bigger than needed.

On a guest with bigger page sizes, the number of HPT entries may be
too high, causing the guest to ask for an HPT resize-down on the first
hotplug.

This becomes a problem when the HPT resize-down fails, causing the
resize to be attempted again on every LMB added, until the HPT size is
compatible with the guest memory size, which results in a major slowdown.
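
For reference, the repeated attempts come from the hot-add path itself:
each LMB added by DLPAR ends up going through (approximate call chain,
intermediate steps elided)

	dlpar_add_lmb()
	  -> ... generic memory hotplug ...
	    -> hash__create_section_mapping()
	      -> resize_hpt_for_hotplug()	/* may attempt (and fail) a resize-down */

so a failing resize-down is retried once per LMB added.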

So, avoiding HPT resizing-down on hot-add significantly improves memory
hotplug times.

As an example, hotplugging 256GB on a 129GB guest took 710s without this
patch, and 21s after it is applied.

Signed-off-by: Leonardo Bras <***@gmail.com>
---
arch/powerpc/mm/book3s64/hash_utils.c | 36 ++++++++++++++++-----------
1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 581b20a2feaf..608e4ed397a9 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -795,7 +795,7 @@ static unsigned long __init htab_get_table_size(void)
}

#ifdef CONFIG_MEMORY_HOTPLUG
-static int resize_hpt_for_hotplug(unsigned long new_mem_size)
+static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
{
unsigned target_hpt_shift;

@@ -804,19 +804,25 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size)

target_hpt_shift = htab_shift_for_mem_size(new_mem_size);

- /*
- * To avoid lots of HPT resizes if memory size is fluctuating
- * across a boundary, we deliberately have some hysterisis
- * here: we immediately increase the HPT size if the target
- * shift exceeds the current shift, but we won't attempt to
- * reduce unless the target shift is at least 2 below the
- * current shift
- */
- if (target_hpt_shift > ppc64_pft_size ||
- target_hpt_shift < ppc64_pft_size - 1)
- return mmu_hash_ops.resize_hpt(target_hpt_shift);
+ if (shrinking) {

- return 0;
+ /*
+ * To avoid lots of HPT resizes if memory size is fluctuating
+ * across a boundary, we deliberately have some hysterisis
+ * here: we immediately increase the HPT size if the target
+ * shift exceeds the current shift, but we won't attempt to
+ * reduce unless the target shift is at least 2 below the
+ * current shift
+ */
+
+ if (target_hpt_shift >= ppc64_pft_size - 1)
+ return 0;
+
+ } else if (target_hpt_shift <= ppc64_pft_size) {
+ return 0;
+ }
+
+ return mmu_hash_ops.resize_hpt(target_hpt_shift);
}

int hash__create_section_mapping(unsigned long start, unsigned long end,
@@ -829,7 +835,7 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
return -1;
}

- resize_hpt_for_hotplug(memblock_phys_mem_size());
+ resize_hpt_for_hotplug(memblock_phys_mem_size(), false);

rc = htab_bolt_mapping(start, end, __pa(start),
pgprot_val(prot), mmu_linear_psize,
@@ -848,7 +854,7 @@ int hash__remove_section_mapping(unsigned long start, unsigned long end)
int rc = htab_remove_mapping(start, end, mmu_linear_psize,
mmu_kernel_ssize);

- if (resize_hpt_for_hotplug(memblock_phys_mem_size()) == -ENOSPC)
+ if (resize_hpt_for_hotplug(memblock_phys_mem_size(), true) == -ENOSPC)
pr_warn("Hash collision while resizing HPT\n");

return rc;
--
2.30.2
David Gibson
2021-06-07 05:02:38 UTC
Post by Leonardo Bras
Because hypervisors may need to create HPTs without knowing the guest
page size, the smallest used page-size (4k) may be chosen, resulting in
a HPT that is possibly bigger than needed.
On a guest with bigger page-sizes, the amount of entries for HTP may be
too high, causing the guest to ask for a HPT resize-down on the first
hotplug.
This becomes a problem when HPT resize-down fails, and causes the
HPT resize to be performed on every LMB added, until HPT size is
compatible to guest memory size, causing a major slowdown.
So, avoiding HPT resizing-down on hot-add significantly improves memory
hotplug times.
As an example, hotplugging 256GB on a 129GB guest took 710s without this
patch, and 21s after applied.
Sorry it's taken me so long to look at these.

I don't love the extra statefulness that the 'shrinking' parameter adds.
Post by Leonardo Bras
---
arch/powerpc/mm/book3s64/hash_utils.c | 36 ++++++++++++++++-----------
1 file changed, 21 insertions(+), 15 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 581b20a2feaf..608e4ed397a9 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -795,7 +795,7 @@ static unsigned long __init htab_get_table_size(void)
}
#ifdef CONFIG_MEMORY_HOTPLUG
-static int resize_hpt_for_hotplug(unsigned long new_mem_size)
+static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
{
unsigned target_hpt_shift;
@@ -804,19 +804,25 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size)
target_hpt_shift = htab_shift_for_mem_size(new_mem_size);
- /*
- * To avoid lots of HPT resizes if memory size is fluctuating
- * across a boundary, we deliberately have some hysterisis
- * here: we immediately increase the HPT size if the target
- * shift exceeds the current shift, but we won't attempt to
- * reduce unless the target shift is at least 2 below the
- * current shift
- */
- if (target_hpt_shift > ppc64_pft_size ||
- target_hpt_shift < ppc64_pft_size - 1)
- return mmu_hash_ops.resize_hpt(target_hpt_shift);
+ if (shrinking) {
- return 0;
+ /*
+ * To avoid lots of HPT resizes if memory size is fluctuating
+ * across a boundary, we deliberately have some hysterisis
+ * here: we immediately increase the HPT size if the target
+ * shift exceeds the current shift, but we won't attempt to
+ * reduce unless the target shift is at least 2 below the
+ * current shift
+ */
+
+ if (target_hpt_shift >= ppc64_pft_size - 1)
+ return 0;
+
+ } else if (target_hpt_shift <= ppc64_pft_size) {
+ return 0;
+ }
+
+ return mmu_hash_ops.resize_hpt(target_hpt_shift);
}
int hash__create_section_mapping(unsigned long start, unsigned long end,
@@ -829,7 +835,7 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
return -1;
}
- resize_hpt_for_hotplug(memblock_phys_mem_size());
+ resize_hpt_for_hotplug(memblock_phys_mem_size(), false);
rc = htab_bolt_mapping(start, end, __pa(start),
pgprot_val(prot), mmu_linear_psize,
@@ -848,7 +854,7 @@ int hash__remove_section_mapping(unsigned long start, unsigned long end)
int rc = htab_remove_mapping(start, end, mmu_linear_psize,
mmu_kernel_ssize);
- if (resize_hpt_for_hotplug(memblock_phys_mem_size()) == -ENOSPC)
+ if (resize_hpt_for_hotplug(memblock_phys_mem_size(), true) == -ENOSPC)
pr_warn("Hash collision while resizing HPT\n");
return rc;
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
Leonardo Bras
2021-04-30 14:36:08 UTC
Every time a memory hotplug happens and the memory limit crosses a 2^n
value, it may be necessary to perform an HPT resize-up, which can take
some time (over 100ms in my tests).

It usually is not an issue, but it can take some time if a lot of memory
is added to a guest with little starting memory: adding 256GB to a 2GB
guest, for example, will require 8 HPT resizes.

Perform an HPT resize before memory hotplug, updating the HPT to its
final size (assuming a successful hotplug), which reduces the number of
HPT resizes to at most one per memory hotplug action.
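
To make the batching concrete, this is the call that the hunks below add
before the LMB add loop (shown here out of context):

	if (!radix_enabled())
		hash_batch_expand_prepare(memblock_phys_mem_size() +
					  lmbs_to_add * drmem_lmb_size());

With it, the 2GB guest gaining 256GB from the example above needs a single
resize-up instead of 8.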

Signed-off-by: Leonardo Bras <***@gmail.com>
---
arch/powerpc/include/asm/book3s/64/hash.h | 2 ++
arch/powerpc/mm/book3s64/hash_utils.c | 20 +++++++++++++++++++
.../platforms/pseries/hotplug-memory.c | 9 +++++++++
3 files changed, 31 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index d959b0195ad9..fad4af8b8543 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -255,6 +255,8 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
int nid, pgprot_t prot);
int hash__remove_section_mapping(unsigned long start, unsigned long end);

+void hash_batch_expand_prepare(unsigned long newsize);
+
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_BOOK3S_64_HASH_H */
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 608e4ed397a9..3fa395b3fe57 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -859,6 +859,26 @@ int hash__remove_section_mapping(unsigned long start, unsigned long end)

return rc;
}
+
+void hash_batch_expand_prepare(unsigned long newsize)
+{
+ const u64 starting_size = ppc64_pft_size;
+
+ /*
+ * Resizing-up HPT should never fail, but there are some cases system starts with higher
+ * SHIFT than required, and we go through the funny case of resizing HPT down while
+ * adding memory
+ */
+
+ while (resize_hpt_for_hotplug(newsize, false) == -ENOSPC) {
+ newsize *= 2;
+ pr_warn("Hash collision while resizing HPT\n");
+
+ /* Do not try to resize to the starting size, or bigger value */
+ if (htab_shift_for_mem_size(newsize) >= starting_size)
+ break;
+ }
+}
#endif /* CONFIG_MEMORY_HOTPLUG */

static void __init hash_init_partition_table(phys_addr_t hash_table,
diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
index 8377f1f7c78e..48b2cfe4ce69 100644
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
+++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -13,6 +13,7 @@
#include <linux/memory.h>
#include <linux/memory_hotplug.h>
#include <linux/slab.h>
+#include <linux/pgtable.h>

#include <asm/firmware.h>
#include <asm/machdep.h>
@@ -671,6 +672,10 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
if (lmbs_available < lmbs_to_add)
return -EINVAL;

+ if (!radix_enabled())
+ hash_batch_expand_prepare(memblock_phys_mem_size() +
+ lmbs_to_add * drmem_lmb_size());
+
for_each_drmem_lmb(lmb) {
if (lmb->flags & DRCONF_MEM_ASSIGNED)
continue;
@@ -788,6 +793,10 @@ static int dlpar_memory_add_by_ic(u32 lmbs_to_add, u32 drc_index)
if (lmbs_available < lmbs_to_add)
return -EINVAL;

+ if (!radix_enabled())
+ hash_batch_expand_prepare(memblock_phys_mem_size() +
+ lmbs_to_add * drmem_lmb_size());
+
for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
if (lmb->flags & DRCONF_MEM_ASSIGNED)
continue;
--
2.30.2
David Gibson
2021-06-07 05:10:00 UTC
Post by Leonardo Bras
Every time a memory hotplug happens, and the memory limit crosses a 2^n
value, it may be necessary to perform HPT resizing-up, which can take
some time (over 100ms in my tests).
It usually is not an issue, but it can take some time if a lot of memory
Adding 256G to a 2GB guest, for example will require 8 HPT resizes.
Perform an HPT resize before memory hotplug, updating HPT to its
final size (considering a successful hotplug), taking the number of
HPT resizes to at most one per memory hotplug action.
---
arch/powerpc/include/asm/book3s/64/hash.h | 2 ++
arch/powerpc/mm/book3s64/hash_utils.c | 20 +++++++++++++++++++
.../platforms/pseries/hotplug-memory.c | 9 +++++++++
3 files changed, 31 insertions(+)
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index d959b0195ad9..fad4af8b8543 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -255,6 +255,8 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
int nid, pgprot_t prot);
int hash__remove_section_mapping(unsigned long start, unsigned long end);
+void hash_batch_expand_prepare(unsigned long newsize);
+
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_BOOK3S_64_HASH_H */
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 608e4ed397a9..3fa395b3fe57 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -859,6 +859,26 @@ int hash__remove_section_mapping(unsigned long start, unsigned long end)
return rc;
}
+
+void hash_batch_expand_prepare(unsigned long newsize)
+{
+ const u64 starting_size = ppc64_pft_size;
+
+ /*
+ * Resizing-up HPT should never fail, but there are some cases system starts with higher
+ * SHIFT than required, and we go through the funny case of resizing HPT down while
+ * adding memory
+ */
+
+ while (resize_hpt_for_hotplug(newsize, false) == -ENOSPC) {
+ newsize *= 2;
+ pr_warn("Hash collision while resizing HPT\n");
+
+ /* Do not try to resize to the starting size, or bigger value */
+ if (htab_shift_for_mem_size(newsize) >= starting_size)
+ break;
+ }
+}
#endif /* CONFIG_MEMORY_HOTPLUG */
static void __init hash_init_partition_table(phys_addr_t hash_table,
diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
index 8377f1f7c78e..48b2cfe4ce69 100644
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
+++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -13,6 +13,7 @@
#include <linux/memory.h>
#include <linux/memory_hotplug.h>
#include <linux/slab.h>
+#include <linux/pgtable.h>
#include <asm/firmware.h>
#include <asm/machdep.h>
@@ -671,6 +672,10 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
if (lmbs_available < lmbs_to_add)
return -EINVAL;
+ if (!radix_enabled())
+ hash_batch_expand_prepare(memblock_phys_mem_size() +
+ lmbs_to_add * drmem_lmb_size());
+
for_each_drmem_lmb(lmb) {
if (lmb->flags & DRCONF_MEM_ASSIGNED)
continue;
@@ -788,6 +793,10 @@ static int dlpar_memory_add_by_ic(u32 lmbs_to_add, u32 drc_index)
if (lmbs_available < lmbs_to_add)
return -EINVAL;
+ if (!radix_enabled())
+ hash_batch_expand_prepare(memblock_phys_mem_size() +
+ lmbs_to_add * drmem_lmb_size());
+
for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
if (lmb->flags & DRCONF_MEM_ASSIGNED)
continue;
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
Leonardo Bras
2021-04-30 14:36:10 UTC
During memory hotunplug, after each LMB is removed, the HPT may be
resized down if it would map a maximum of 4 times the current amount of
memory (2 shifts, due to the introduced hysteresis).

It usually is not an issue, but it can take a lot of time if HPT
resizing-down fails. This happens because resize-down failures usually
repeat at each LMB removal, until there are no more conflicting bolted
entries, which can take a while.

This can be solved by doing a single HPT resize at the end of memory
hotunplug, after all requested entries are removed.

To make this happen, it's necessary to temporarily disable all HPT
resize-downs before hotunplug, re-enable them after hotunplug ends,
and then resize-down HPT to the current memory size.
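
Condensed from the hunks below, the hot-unplug paths (and the rollback
paths of hot-add) bracket their LMB loops like this; while
hash_batch_shrink_begin() holds the mutex added in hash_utils.c, the
per-LMB calls to resize_hpt_for_hotplug() give up via mutex_trylock()
and return 0:

	if (!radix_enabled())
		hash_batch_shrink_begin();	/* takes hpt_resize_down_lock */

	/* ... remove each LMB; resize-downs are suppressed ... */

	if (!radix_enabled())
		hash_batch_shrink_end();	/* single resize-down, then unlock */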

As an example, hotunplugging 256GB from a 385GB guest took 621s without
this patch, and 100s after it is applied.

Signed-off-by: Leonardo Bras <***@gmail.com>
---
arch/powerpc/include/asm/book3s/64/hash.h | 2 +
arch/powerpc/mm/book3s64/hash_utils.c | 45 +++++++++++++++++--
.../platforms/pseries/hotplug-memory.c | 26 +++++++++++
3 files changed, 70 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index fad4af8b8543..6cd66e7e98c9 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -256,6 +256,8 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
int hash__remove_section_mapping(unsigned long start, unsigned long end);

void hash_batch_expand_prepare(unsigned long newsize);
+void hash_batch_shrink_begin(void);
+void hash_batch_shrink_end(void);

#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 3fa395b3fe57..73ecd0f61acd 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -795,6 +795,9 @@ static unsigned long __init htab_get_table_size(void)
}

#ifdef CONFIG_MEMORY_HOTPLUG
+
+static DEFINE_MUTEX(hpt_resize_down_lock);
+
static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
{
unsigned target_hpt_shift;
@@ -805,7 +808,7 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
target_hpt_shift = htab_shift_for_mem_size(new_mem_size);

if (shrinking) {
-
+ int ret;
/*
* To avoid lots of HPT resizes if memory size is fluctuating
* across a boundary, we deliberately have some hysterisis
@@ -818,10 +821,20 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
if (target_hpt_shift >= ppc64_pft_size - 1)
return 0;

- } else if (target_hpt_shift <= ppc64_pft_size) {
- return 0;
+ /* When batch removing entries, only resizes HPT at the end. */
+
+ if (!mutex_trylock(&hpt_resize_down_lock))
+ return 0;
+
+ ret = mmu_hash_ops.resize_hpt(target_hpt_shift);
+
+ mutex_unlock(&hpt_resize_down_lock);
+ return ret;
}

+ if (target_hpt_shift <= ppc64_pft_size)
+ return 0;
+
return mmu_hash_ops.resize_hpt(target_hpt_shift);
}

@@ -879,6 +892,32 @@ void hash_batch_expand_prepare(unsigned long newsize)
break;
}
}
+
+void hash_batch_shrink_begin(void)
+{
+ /* Disable HPT resize-down during hot-unplug */
+ mutex_lock(&hpt_resize_down_lock);
+}
+
+void hash_batch_shrink_end(void)
+{
+ const u64 starting_size = ppc64_pft_size;
+ unsigned long newsize;
+
+ newsize = memblock_phys_mem_size();
+ /* Resize to smallest SHIFT possible */
+ while (resize_hpt_for_hotplug(newsize, true) == -ENOSPC) {
+ newsize *= 2;
+ pr_warn("Hash collision while resizing HPT\n");
+
+ /* Do not try to resize to the starting size, or bigger value */
+ if (htab_shift_for_mem_size(newsize) >= starting_size)
+ break;
+ }
+
+ /* Re-enables HPT resize-down after hot-unplug */
+ mutex_unlock(&hpt_resize_down_lock);
+}
#endif /* CONFIG_MEMORY_HOTPLUG */

static void __init hash_init_partition_table(phys_addr_t hash_table,
diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
index 48b2cfe4ce69..44bc50d72353 100644
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
+++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -426,6 +426,9 @@ static int dlpar_memory_remove_by_count(u32 lmbs_to_remove)
return -EINVAL;
}

+ if (!radix_enabled())
+ hash_batch_shrink_begin();
+
for_each_drmem_lmb(lmb) {
rc = dlpar_remove_lmb(lmb);
if (rc)
@@ -471,6 +474,9 @@ static int dlpar_memory_remove_by_count(u32 lmbs_to_remove)
rc = 0;
}

+ if (!radix_enabled())
+ hash_batch_shrink_end();
+
return rc;
}

@@ -533,6 +539,9 @@ static int dlpar_memory_remove_by_ic(u32 lmbs_to_remove, u32 drc_index)
if (lmbs_available < lmbs_to_remove)
return -EINVAL;

+ if (!radix_enabled())
+ hash_batch_shrink_begin();
+
for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
if (!(lmb->flags & DRCONF_MEM_ASSIGNED))
continue;
@@ -573,6 +582,9 @@ static int dlpar_memory_remove_by_ic(u32 lmbs_to_remove, u32 drc_index)
}
}

+ if (!radix_enabled())
+ hash_batch_shrink_end();
+
return rc;
}

@@ -703,6 +715,9 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
if (lmbs_added != lmbs_to_add) {
pr_err("Memory hot-add failed, removing any added LMBs\n");

+ if (!radix_enabled())
+ hash_batch_shrink_begin();
+
for_each_drmem_lmb(lmb) {
if (!drmem_lmb_reserved(lmb))
continue;
@@ -716,6 +731,10 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)

drmem_remove_lmb_reservation(lmb);
}
+
+ if (!radix_enabled())
+ hash_batch_shrink_end();
+
rc = -EINVAL;
} else {
for_each_drmem_lmb(lmb) {
@@ -817,6 +836,9 @@ static int dlpar_memory_add_by_ic(u32 lmbs_to_add, u32 drc_index)
if (rc) {
pr_err("Memory indexed-count-add failed, removing any added LMBs\n");

+ if (!radix_enabled())
+ hash_batch_shrink_begin();
+
for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
if (!drmem_lmb_reserved(lmb))
continue;
@@ -830,6 +852,10 @@ static int dlpar_memory_add_by_ic(u32 lmbs_to_add, u32 drc_index)

drmem_remove_lmb_reservation(lmb);
}
+
+ if (!radix_enabled())
+ hash_batch_shrink_end();
+
rc = -EINVAL;
} else {
for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
--
2.30.2
David Gibson
2021-06-07 05:20:12 UTC
Post by Leonardo Bras
During memory hotunplug, after each LMB is removed, the HPT may be
resized-down if it would map a max of 4 times the current amount of memory.
(2 shifts, due to introduced histeresis)
It usually is not an issue, but it can take a lot of time if HPT
resizing-down fails. This happens because resize-down failures
usually repeat at each LMB removal, until there are no more bolted entries
conflict, which can take a while to happen.
This can be solved by doing a single HPT resize at the end of memory
hotunplug, after all requested entries are removed.
To make this happen, it's necessary to temporarily disable all HPT
resize-downs before hotunplug, re-enable them after hotunplug ends,
and then resize-down HPT to the current memory size.
As an example, hotunplugging 256GB from a 385GB guest took 621s without
this patch, and 100s after applied.
Hrm. This looks correct, but it seems overly complicated.

AFAICT, the resize calls that this adds should in practice be the
*only* times we call resize; all the calls from the lower-level code
should be suppressed. In which case, can't we just remove those calls
entirely and not deal with the clunky locking and exclusion here?
That should also remove the need for the 'shrinking' parameter in 1/3.
Post by Leonardo Bras
---
arch/powerpc/include/asm/book3s/64/hash.h | 2 +
arch/powerpc/mm/book3s64/hash_utils.c | 45 +++++++++++++++++--
.../platforms/pseries/hotplug-memory.c | 26 +++++++++++
3 files changed, 70 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index fad4af8b8543..6cd66e7e98c9 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -256,6 +256,8 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
int hash__remove_section_mapping(unsigned long start, unsigned long end);
void hash_batch_expand_prepare(unsigned long newsize);
+void hash_batch_shrink_begin(void);
+void hash_batch_shrink_end(void);
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 3fa395b3fe57..73ecd0f61acd 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -795,6 +795,9 @@ static unsigned long __init htab_get_table_size(void)
}
#ifdef CONFIG_MEMORY_HOTPLUG
+
+static DEFINE_MUTEX(hpt_resize_down_lock);
+
static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
{
unsigned target_hpt_shift;
@@ -805,7 +808,7 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
target_hpt_shift = htab_shift_for_mem_size(new_mem_size);
if (shrinking) {
-
+ int ret;
/*
* To avoid lots of HPT resizes if memory size is fluctuating
* across a boundary, we deliberately have some hysterisis
@@ -818,10 +821,20 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
if (target_hpt_shift >= ppc64_pft_size - 1)
return 0;
- } else if (target_hpt_shift <= ppc64_pft_size) {
- return 0;
+ /* When batch removing entries, only resizes HPT at the end. */
+
+ if (!mutex_trylock(&hpt_resize_down_lock))
+ return 0;
+
+ ret = mmu_hash_ops.resize_hpt(target_hpt_shift);
+
+ mutex_unlock(&hpt_resize_down_lock);
+ return ret;
}
+ if (target_hpt_shift <= ppc64_pft_size)
+ return 0;
+
return mmu_hash_ops.resize_hpt(target_hpt_shift);
}
@@ -879,6 +892,32 @@ void hash_batch_expand_prepare(unsigned long newsize)
break;
}
}
+
+void hash_batch_shrink_begin(void)
+{
+ /* Disable HPT resize-down during hot-unplug */
+ mutex_lock(&hpt_resize_down_lock);
+}
+
+void hash_batch_shrink_end(void)
+{
+ const u64 starting_size = ppc64_pft_size;
+ unsigned long newsize;
+
+ newsize = memblock_phys_mem_size();
+ /* Resize to smallest SHIFT possible */
+ while (resize_hpt_for_hotplug(newsize, true) == -ENOSPC) {
+ newsize *= 2;
+ pr_warn("Hash collision while resizing HPT\n");
+
+ /* Do not try to resize to the starting size, or bigger value */
+ if (htab_shift_for_mem_size(newsize) >= starting_size)
+ break;
+ }
+
+ /* Re-enables HPT resize-down after hot-unplug */
+ mutex_unlock(&hpt_resize_down_lock);
+}
#endif /* CONFIG_MEMORY_HOTPLUG */
static void __init hash_init_partition_table(phys_addr_t hash_table,
diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
index 48b2cfe4ce69..44bc50d72353 100644
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
+++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -426,6 +426,9 @@ static int dlpar_memory_remove_by_count(u32 lmbs_to_remove)
return -EINVAL;
}
+ if (!radix_enabled())
+ hash_batch_shrink_begin();
+
for_each_drmem_lmb(lmb) {
rc = dlpar_remove_lmb(lmb);
if (rc)
@@ -471,6 +474,9 @@ static int dlpar_memory_remove_by_count(u32 lmbs_to_remove)
rc = 0;
}
+ if (!radix_enabled())
+ hash_batch_shrink_end();
+
return rc;
}
@@ -533,6 +539,9 @@ static int dlpar_memory_remove_by_ic(u32 lmbs_to_remove, u32 drc_index)
if (lmbs_available < lmbs_to_remove)
return -EINVAL;
+ if (!radix_enabled())
+ hash_batch_shrink_begin();
+
for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
if (!(lmb->flags & DRCONF_MEM_ASSIGNED))
continue;
@@ -573,6 +582,9 @@ static int dlpar_memory_remove_by_ic(u32 lmbs_to_remove, u32 drc_index)
}
}
+ if (!radix_enabled())
+ hash_batch_shrink_end();
+
return rc;
}
@@ -703,6 +715,9 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
if (lmbs_added != lmbs_to_add) {
pr_err("Memory hot-add failed, removing any added LMBs\n");
+ if (!radix_enabled())
+ hash_batch_shrink_begin();
+
for_each_drmem_lmb(lmb) {
if (!drmem_lmb_reserved(lmb))
continue;
@@ -716,6 +731,10 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
drmem_remove_lmb_reservation(lmb);
}
+
+ if (!radix_enabled())
+ hash_batch_shrink_end();
+
rc = -EINVAL;
} else {
for_each_drmem_lmb(lmb) {
@@ -817,6 +836,9 @@ static int dlpar_memory_add_by_ic(u32 lmbs_to_add, u32 drc_index)
if (rc) {
pr_err("Memory indexed-count-add failed, removing any added LMBs\n");
+ if (!radix_enabled())
+ hash_batch_shrink_begin();
+
for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
if (!drmem_lmb_reserved(lmb))
continue;
@@ -830,6 +852,10 @@ static int dlpar_memory_add_by_ic(u32 lmbs_to_add, u32 drc_index)
drmem_remove_lmb_reservation(lmb);
}
+
+ if (!radix_enabled())
+ hash_batch_shrink_end();
+
rc = -EINVAL;
} else {
for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
Leonardo Bras
2021-04-30 14:41:59 UTC
CC: David Gibson

http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=241574&state=%2A&archive=both