Efficient And Safe Allocations Everywhere!


Category: Programming
Source: blog.chromium.org

In our constant work to improve performance, our engineers sometimes have to seek optimizations in places that most software developers don't venture. In this post in our series, The Fast and The Curious, a team of senior engineers shows how they approached replacing the system-level memory allocator with an optimized version, yielding significant memory savings of up to 22% on Windows.


PartitionAlloc is Chromium's memory allocator, designed for low fragmentation, high speed, and strong security, and it has been used extensively within Blink (Chromium's rendering engine). In Chrome 89, the entire Chromium codebase transitioned to using PartitionAlloc everywhere (by intercepting and replacing malloc() and new) on Windows 64-bit and Android. Data from the field demonstrates up to 22% memory savings, and up to 9% improvements in responsiveness and scroll latency of Chrome.
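For readers curious what "intercepting and replacing malloc() and new" can look like in C++, here is a minimal, hedged sketch that reroutes the global operator new/delete through a custom allocator. This is not Chromium's actual shim layer, which is considerably more elaborate and also intercepts malloc()/free() for C callers; my_partition_alloc() and my_partition_free() are hypothetical stand-ins backed by the system heap so the example stays self-contained.

```cpp
// Minimal sketch: route C++ allocations through a custom allocator by
// replacing the global operator new/delete. my_partition_alloc() and
// my_partition_free() are hypothetical stand-ins, not Chromium APIs.
#include <cstdlib>
#include <new>

void* my_partition_alloc(std::size_t size) {
  return std::malloc(size);  // a real implementation would call into PartitionAlloc
}

void my_partition_free(void* ptr) {
  std::free(ptr);
}

void* operator new(std::size_t size) {
  void* ptr = my_partition_alloc(size);
  if (!ptr) throw std::bad_alloc();
  return ptr;
}

void operator delete(void* ptr) noexcept {
  my_partition_free(ptr);
}

int main() {
  int* value = new int(42);  // now served by my_partition_alloc()
  delete value;              // and released via my_partition_free()
}
```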



Platform        | Memory Savings: Browser Process | Memory Savings: Renderer | Speedup: Responsiveness | Speedup: Scroll Latency
Android         | 8%                              | 4%                       | 4% less jank*           | 5% faster
Windows 64-bit  | 22%                             | 8%                       | 9% less jank*           | 5% faster



Here's a closer look at memory usage in the browser process for Windows as the M89 release began rolling out in early March:

[Chart: Windows browser-process memory usage during the early-March M89 rollout]

Background

Chrome is a multi-platform, multi-process, multi-threaded application, serving a wide range of needs, from small embedded WebViews on Android to spacecraft. Performance and memory footprint are of critical importance, requiring a tight integration between Chrome and its memory allocator. But heterogeneity across platforms can be prohibitive, with each platform shipping a different allocator implementation: tcmalloc on Linux and Chrome OS, jemalloc or scudo on Android, and the Low Fragmentation Heap (LFH) on Windows.


When we started this project, our goals were to: 1) unify memory allocation across platforms, 2) achieve the lowest memory footprint without compromising security and performance, and 3) tailor the allocator to optimize the performance of Chrome. Thus we decided to use Chromium's cross-platform allocator, to optimize memory usage for client rather than server workloads, and to focus on meaningful end-user activities rather than micro-benchmarks that would not matter much in real-world usage.




Allocator Security


PartitionAlloc was designed to support multiple independent partitions, i.e. non-overlapping regions of memory. We use these partitions throughout Blink to thwart some forms of type confusion attacks, for example by keeping strings separate from layout objects. However, this approach only avoids collisions between types that are allocated from different partitions. Furthermore, PartitionAlloc buckets allocations by size, to help avoid type confusion when potentially colliding objects are of dissimilar size. These techniques work because PartitionAlloc doesn't re-use address space: once PartitionAlloc dedicates a region of address space to a certain partition and size bucket, it always belongs to that partition and size bucket.
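As a conceptual sketch of that isolation property, the toy partition below hands out slots only from its own pre-reserved region and never recycles that memory for anything else. TinyPartition and its constants are invented for illustration and are not the real PartitionAlloc API.

```cpp
// Conceptual sketch only: each partition owns a disjoint, pre-reserved
// region and never recycles its address space for another partition.
#include <cstddef>
#include <cstdint>

template <std::size_t kCapacity>
class TinyPartition {  // hypothetical illustration of a partition
 public:
  void* Alloc(std::size_t size) {
    // Round up to 16 bytes, loosely mimicking slot alignment.
    size = (size + 15) & ~std::size_t{15};
    if (used_ + size > kCapacity) return nullptr;  // no fallback in this toy
    void* slot = region_ + used_;
    used_ += size;
    return slot;  // the address stays inside this partition's region forever
  }

 private:
  alignas(16) std::uint8_t region_[kCapacity];
  std::size_t used_ = 0;
};

// One partition per object family, mirroring the string/layout separation
// described above. Usage: void* s = string_partition.Alloc(24);
TinyPartition<1 << 20> string_partition;
TinyPartition<1 << 20> layout_partition;
```

Because string_partition and layout_partition in this toy never share address space, a dangling pointer into one cannot be made to alias a live object in the other, which is the property the no-reuse rule preserves at full scale.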


Additionally, PartitionAlloc protects some of its metadata with guard pages (inaccessible ranges) around memory regions. Not all metadata is equal, however: free-list entries are stored within previously allocated regions, and thus surrounded by other allocations. To detect corrupted free-list entries and off-by-one overflows from client code, we encode and shadow them.
Finally, having our own allocator enables advanced security features like MiraclePtr and *Scan.
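To make the free-list hardening more concrete, the sketch below stores each "next" pointer in an encoded form alongside a redundant shadow value and verifies the pair before following it. The constant, names, and exact transformation are illustrative only and differ from what PartitionAlloc actually uses.

```cpp
// Simplified illustration of encoded + shadowed free-list entries.
#include <cstdint>
#include <cstdlib>

struct FreeSlot {
  std::uintptr_t encoded_next;  // next free slot, stored in transformed form
  std::uintptr_t shadow;        // redundant copy used to detect corruption
};

constexpr std::uintptr_t kKey = 0xA5A5A5A5A5A5A5A5ull;  // illustrative constant

inline std::uintptr_t Encode(void* next) {
  return reinterpret_cast<std::uintptr_t>(next) ^ kKey;
}

inline void Push(FreeSlot* slot, void* next) {
  slot->encoded_next = Encode(next);
  slot->shadow = ~slot->encoded_next;  // cheap invariant: shadow == ~encoded
}

inline void* Pop(const FreeSlot* slot) {
  // An off-by-one write from a neighboring allocation is very likely to
  // break the encoded/shadow relationship, so we can crash early.
  if (slot->shadow != ~slot->encoded_next) std::abort();
  return reinterpret_cast<void*>(slot->encoded_next ^ kKey);
}
```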



Architecture Details

Each partition in PartitionAlloc uses a single, central, slab-based allocator to conserve memory, with a minimal per-thread cache in front for scaling to multi-threaded workloads. This simplicity also pays performance dividends: we've extensively profiled and aggressively trimmed the allocator's fast path, improving thread-local storage access and locking, reducing cache-line fetches, and removing branches.

PartitionAlloc pre-reserves slabs of virtual address space. They are gradually backed by physical memory as allocation requests arrive. Small and medium-sized allocations are grouped into geometrically spaced, size-segregated buckets, e.g. [241; 256], [257; 288]. Each slab is split into regions (called "slot spans") that satisfy allocations ("slots") from only one particular bucket, thereby increasing cache locality while lowering fragmentation. Conversely, larger allocations don't go through the bucket logic and are fulfilled using the operating system's primitives directly (mmap() on POSIX systems, and VirtualAlloc() on Windows).
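The bucket spacing can be pictured as rounding each request up to the nearest of a few evenly spaced sizes within its power-of-two range. The helper below is illustrative only (BucketSize() and kSubBucketsPerOrder are not PartitionAlloc's real table or constants), but it reproduces the [241; 256] and [257; 288] buckets quoted above.

```cpp
// Rough sketch of geometric, size-segregated bucketing. Requires C++20 for
// std::bit_width.
#include <bit>
#include <cstddef>
#include <iostream>

constexpr std::size_t kSubBucketsPerOrder = 8;  // illustrative granularity

std::size_t BucketSize(std::size_t size) {
  if (size <= 16) return 16;  // smallest bucket in this sketch
  // Largest power of two <= size.
  std::size_t order = std::size_t{1} << (std::bit_width(size) - 1);
  std::size_t step = order / kSubBucketsPerOrder;  // spacing within the order
  return ((size + step - 1) / step) * step;        // round up to the next bucket
}

int main() {
  // 250 bytes falls in the [241; 256] bucket, 260 bytes in [257; 288].
  std::cout << BucketSize(250) << " " << BucketSize(260) << "\n";  // prints: 256 288
}
```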

This central allocator is protected by a single per-partition lock. To mitigate the scalability problem arising from contention, we add a small, per-thread cache of small slots in front, yielding a three-tiered architecture:





The first layer (Per-thread cache) holds a small number of slots belonging to the smaller, more commonly used buckets. Because these slots are stored per thread, they can be allocated without a lock, requiring only a fast thread-local storage lookup and improving cache locality in the process. The per-thread cache has been tailored to satisfy the majority of requests by allocating from and releasing memory to the second layer in batches, amortizing lock acquisition and further improving locality while not trapping excess memory.

The second layer (Slot span free-lists) is invoked upon a per-thread cache miss. For each bucket size, PartitionAlloc tracks a slot span with free slots of that size, and takes a slot from that span's free-list. This is still a fast path, but slower than the per-thread cache because it requires taking a lock. However, this layer is only hit for larger allocations not supported by the per-thread cache, or as a batch to fill the per-thread cache.

Finally, if there are no free slots in the bucket, the third layer (Slot span management) either carves out space from a slab for a new slot span, or allocates an entirely new slab from the operating system, which is a slow but very infrequent operation.
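Putting the three layers together, a heavily condensed sketch might look like the following. All names, constants, and data structures (Partition, SlotSpan, RefillBatch, ProvisionSpan, kBatchSize) are hypothetical stand-ins rather than PartitionAlloc's actual internals, and deallocation and error handling are largely omitted.

```cpp
// Condensed, illustrative sketch of the three layers described above.
#include <cstddef>
#include <cstdlib>
#include <mutex>
#include <vector>

constexpr std::size_t kNumBuckets = 64;  // illustrative bucket count
constexpr std::size_t kBatchSize = 16;   // slots moved per thread-cache refill

struct SlotSpan {
  std::vector<void*> free_slots;  // free slots, all of one bucket's size
};

struct Bucket {
  SlotSpan* active_span = nullptr;  // a span known to have free slots, if any
};

struct Partition {
  std::mutex lock;  // the single per-partition lock
  Bucket buckets[kNumBuckets];

  // Layer 3: in the real allocator this carves a slot span out of a
  // pre-reserved slab, or maps a brand new slab from the OS (slow but rare).
  // Here we simply slice a malloc'ed block into fixed-size slots; old spans
  // are leaked in this toy.
  SlotSpan* ProvisionSpan(std::size_t bucket_index) {
    const std::size_t slot_size = 16 * (bucket_index + 1);  // toy sizing
    auto* span = new SlotSpan;
    char* block = static_cast<char*>(std::malloc(slot_size * kBatchSize));
    if (!block) return span;  // empty span signals out-of-memory
    for (std::size_t i = 0; i < kBatchSize; ++i)
      span->free_slots.push_back(block + i * slot_size);
    return span;
  }

  // Layer 2 (falling back to layer 3 on demand), under the lock:
  // hand out a whole batch of slots at once.
  void RefillBatch(std::size_t bucket_index, std::vector<void*>& out) {
    std::lock_guard<std::mutex> guard(lock);
    Bucket& bucket = buckets[bucket_index];
    if (!bucket.active_span || bucket.active_span->free_slots.empty())
      bucket.active_span = ProvisionSpan(bucket_index);
    auto& free_slots = bucket.active_span->free_slots;
    while (!free_slots.empty() && out.size() < kBatchSize) {
      out.push_back(free_slots.back());
      free_slots.pop_back();
    }
  }
};

// Layer 1: small per-thread cache, accessed with no lock at all.
thread_local std::vector<void*> tls_cache[kNumBuckets];

void* Allocate(Partition& partition, std::size_t bucket_index) {
  auto& cache = tls_cache[bucket_index];
  if (cache.empty())
    partition.RefillBatch(bucket_index, cache);  // amortizes the lock
  if (cache.empty()) return nullptr;             // allocation failed in this toy
  void* slot = cache.back();
  cache.pop_back();
  return slot;  // common case: one TLS lookup, one pop, no lock
}
```

In the same spirit, frees would land in the per-thread cache first and be flushed back to their slot span in batches, keeping the lock off the common path in both directions.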

The overall performance and space-efficiency of the allocator hinge on the many tradeoffs across its layers, such as how much to cache, how many buckets to use, and when to reclaim memory. Please refer to the PartitionAlloc documentation to learn more about the design.

All in all, we hope you will enjoy the additional memory savings and performance improvements brought by PartitionAlloc, ensuring a safer, leaner, and faster Chrome for users on Earth and in outer space alike. Stay tuned for further improvements and support for more platforms coming in the near future.

Posted by Benoît Lizé and Bartek Nowierski, Chrome Software Engineers

Data source for all statistics: Real-world data anonymously aggregated from Chrome clients.
*The core metric measures jank (delays in handling user input) every 30 seconds.
