/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
 * Use is subject to license terms.
 */

/*
 * Kernel memory allocator, as described in the following two papers and a
 * statement about the consolidator:
 *
 * Jeff Bonwick,
 * The Slab Allocator: An Object-Caching Kernel Memory Allocator.
 * Proceedings of the Summer 1994 Usenix Conference.
 * Available as /shared/sac/PSARC/1994/028/materials/kmem.pdf.
 *
 * Jeff Bonwick and Jonathan Adams,
 * Magazines and vmem: Extending the Slab Allocator to Many CPUs and
 * Arbitrary Resources.
 * Proceedings of the 2001 Usenix Conference.
 * Available as /shared/sac/PSARC/2000/550/materials/vmem.pdf.
 *
 * kmem Slab Consolidator Big Theory Statement:
 *
 * 1. Motivation
 *
 * As stated in Bonwick94, slabs provide the following advantages over other
 * allocation structures in terms of memory fragmentation:
 *
 *  - Internal fragmentation (per-buffer wasted space) is minimal.
 *  - Severe external fragmentation (unused buffers on the free list) is
 *    unlikely.
 *
 * Segregating objects by size eliminates one source of external fragmentation,
 * and according to Bonwick:
 *
 *   The other reason that slabs reduce external fragmentation is that all
 *   objects in a slab are of the same type, so they have the same lifetime
 *   distribution.
 *   The resulting segregation of short-lived and long-lived objects at slab
 *   granularity reduces the likelihood of an entire page being held hostage
 *   due to a single long-lived allocation [Barrett93, Hanson90].
 *
 * While unlikely, severe external fragmentation remains possible. Clients that
 * allocate both short- and long-lived objects from the same cache cannot
 * anticipate the distribution of long-lived objects within the allocator's
 * slab implementation. Even a small percentage of long-lived objects
 * distributed randomly across many slabs can lead to a worst case scenario
 * where the client frees the majority of its objects and the system gets back
 * almost none of the slabs. Despite the client doing what it reasonably can to
 * help the system reclaim memory, the allocator cannot shake free enough slabs
 * because of lonely allocations stubbornly hanging on. Although the allocator
 * is in a position to diagnose the fragmentation, there is nothing that the
 * allocator by itself can do about it. It only takes a single allocated object
 * to prevent an entire slab from being reclaimed, and any object handed out by
 * kmem_cache_alloc() is by definition in the client's control. Conversely,
 * although the client is in a position to move a long-lived object, it has no
 * way of knowing if the object is causing fragmentation, and if so, where to
 * move it. A solution necessarily requires further cooperation between the
 * allocator and the client.
 *
 * 2. Move Callback
 *
 * The kmem slab consolidator therefore adds a move callback to the
 * allocator/client interface, improving worst-case external fragmentation in
 * kmem caches that supply a function to move objects from one memory location
 * to another. In a situation of low memory kmem attempts to consolidate all of
 * a cache's slabs at once; otherwise it works slowly to bring external
 * fragmentation within the 1/8 limit guaranteed for internal fragmentation,
 * thereby helping to avoid a low memory situation in the future.
 *
 * The callback has the following signature:
 *
 *   kmem_cbrc_t move(void *old, void *new, size_t size, void *user_arg)
 *
 * It supplies the kmem client with two addresses: the allocated object that
 * kmem wants to move and a buffer selected by kmem for the client to use as
 * the copy destination. The callback is kmem's way of saying "Please get off
 * of this buffer and use this one instead." kmem knows where it wants to move
 * the object in order to best reduce fragmentation. All the client needs to
 * know about the second argument (void *new) is that it is an allocated,
 * constructed object ready to take the contents of the old object.
 * When the move function is called, the system is likely to be low on memory,
 * and the new object spares the client from having to worry about allocating
 * memory for the requested move. The third argument supplies the size of the
 * object, in case a single move function handles multiple caches whose objects
 * differ only in size (such as zio_buf_512, zio_buf_1024, etc). Finally, the
 * same optional user argument passed to the constructor, destructor, and
 * reclaim functions is also passed to the move callback.
 *
 * 2.1 Setting the Move Callback
 *
 * The client sets the move callback after creating the cache and before
 * allocating from it:
 *
 *	object_cache = kmem_cache_create(...);
 *	kmem_cache_set_move(object_cache, object_move);
 *
 * 2.2 Move Callback Return Values
 *
 * Only the client knows about its own data and when is a good time to move it.
 * The client is cooperating with kmem to return unused memory to the system,
 * and kmem respectfully accepts this help at the client's convenience. When
 * asked to move an object, the client can respond with any of the following:
 *
 *	typedef enum kmem_cbrc {
 *		KMEM_CBRC_YES,
 *		KMEM_CBRC_NO,
 *		KMEM_CBRC_LATER,
 *		KMEM_CBRC_DONT_NEED,
 *		KMEM_CBRC_DONT_KNOW
 *	} kmem_cbrc_t;
 *
 * The client must not explicitly kmem_cache_free() either of the objects
 * passed to the callback, since kmem wants to free them directly to the slab
 * layer (bypassing the per-CPU magazine layer). The response tells kmem which
 * of the objects to free:
 *
 *       YES: (Did it) The client moved the object, so kmem frees the old one.
 *        NO: (Never) The client refused, so kmem frees the new object (the
 *            unused copy destination). kmem also marks the slab of the old
 *            object so as not to bother the client with further callbacks for
 *            that object as long as the slab remains on the partial slab list.
 *            (The system won't be getting the slab back as long as the
 *            immovable object holds it hostage, so there's no point in moving
 *            any of its objects.)
 *     LATER: The client is using the object and cannot move it now, so kmem
 *            frees the new object (the unused copy destination). kmem still
 *            attempts to move other objects off the slab, since it expects to
 *            succeed in clearing the slab in a later callback. The client
 *            should use LATER instead of NO if the object is likely to become
 *            movable very soon.
 * DONT_NEED: The client no longer needs the object, so kmem frees the old
 *            along with the new object (the unused copy destination). This
 *            response is the client's opportunity to be a model citizen and
 *            give back as much as it can.
 * DONT_KNOW: The client does not know about the object because
 *            a) the client has just allocated the object and not yet put it
 *               wherever it expects to find known objects
 *            b) the client has removed the object from wherever it expects to
 *               find known objects and is about to free it, or
 *            c) the client has freed the object.
 *            In all these cases (a, b, and c) kmem frees the new object (the
 *            unused copy destination) and searches for the old object in the
 *            magazine layer. If found, the object is removed from the magazine
 *            layer and freed to the slab layer so it will no longer hold the
 *            slab hostage.
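 *
 * Putting the responses together, a move callback for a cache of hypothetical
 * foo_t objects might take the following overall shape. This is only a sketch:
 * foo_known() and foo_busy() are placeholders for the recognition and hold
 * strategies developed in the sections that follow.
 *
 *	static kmem_cbrc_t
 *	foo_move(void *old, void *new, size_t size, void *user_arg)
 *	{
 *		foo_t *fp = old;
 *
 *		if (!foo_known(fp))		// see section 2.4
 *			return (KMEM_CBRC_DONT_KNOW);
 *		if (foo_busy(fp))		// see section 2.5
 *			return (KMEM_CBRC_LATER);
 *		bcopy(old, new, size);
 *		... update anything that refers to the old location ...
 *		return (KMEM_CBRC_YES);
 *	}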
 *
 * 2.3 Object States
 *
 * Neither kmem nor the client can be assumed to know the object's whereabouts
 * at the time of the callback. An object belonging to a kmem cache may be in
 * any of the following states:
 *
 * 1. Uninitialized on the slab
 * 2. Allocated from the slab but not constructed (still uninitialized)
 * 3. Allocated from the slab, constructed, but not yet ready for business
 *    (not in a valid state for the move callback)
 * 4. In use (valid and known to the client)
 * 5. About to be freed (no longer in a valid state for the move callback)
 * 6. Freed to a magazine (still constructed)
 * 7. Allocated from a magazine, not yet ready for business (not in a valid
 *    state for the move callback), and about to return to state #4
 * 8. Deconstructed on a magazine that is about to be freed
 * 9. Freed to the slab
 *
 * Since the move callback may be called at any time while the object is in any
 * of the above states (except state #1), the client needs a safe way to
 * determine whether or not it knows about the object. Specifically, the client
 * needs to know whether or not the object is in state #4, the only state in
 * which a move is valid. If the object is in any other state, the client
 * should immediately return KMEM_CBRC_DONT_KNOW, since it is unsafe to access
 * any of the object's fields.
 *
 * Note that although an object may be in state #4 when kmem initiates the move
 * request, the object may no longer be in that state by the time kmem actually
 * calls the move function. Not only does the client free objects
 * asynchronously, kmem itself puts move requests on a queue where they are
 * pending until kmem processes them from another context. Also, objects freed
 * to a magazine appear allocated from the point of view of the slab layer, so
 * kmem may even initiate requests for objects in a state other than state #4.
 *
 * 2.3.1 Magazine Layer
 *
 * An important insight revealed by the states listed above is that the
 * magazine layer is populated only by kmem_cache_free(). Magazines of
 * constructed objects are never populated directly from the slab layer (which
 * contains raw, unconstructed objects). Whenever an allocation request cannot
 * be satisfied from the magazine layer, the magazines are bypassed and the
 * request is satisfied from the slab layer (creating a new slab if necessary).
 * kmem calls the object constructor only when allocating from the slab layer,
 * and only in response to kmem_cache_alloc() or to prepare the destination
 * buffer passed in the move callback. kmem does not preconstruct objects in
 * anticipation of kmem_cache_alloc().
 *
 * 2.3.2 Object Constructor and Destructor
 *
 * If the client supplies a destructor, it must be valid to call the destructor
 * on a newly created object (immediately after the constructor).
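 *
 * For example, a constructor/destructor pair satisfying this requirement might
 * look like the following sketch (object_t and its members are hypothetical):
 *
 *	static int
 *	object_construct(void *buf, void *user_arg, int kmflags)
 *	{
 *		object_t *object = buf;
 *
 *		mutex_init(&object->o_lock, NULL, MUTEX_DEFAULT, NULL);
 *		object->o_data = NULL;	// destructor must tolerate this
 *		return (0);
 *	}
 *
 *	static void
 *	object_destruct(void *buf, void *user_arg)
 *	{
 *		object_t *object = buf;
 *
 *		ASSERT(object->o_data == NULL);	// valid right after construct
 *		mutex_destroy(&object->o_lock);
 *	}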
 *
 * 2.4 Recognizing Known Objects
 *
 * There is a simple test to determine safely whether or not the client knows
 * about a given object in the move callback. It relies on the fact that kmem
 * guarantees that the object of the move callback has only been touched by the
 * client itself or else by kmem. kmem does this by ensuring that none of the
 * cache's slabs are freed to the virtual memory (VM) subsystem while a move
 * callback is pending. When the last object on a slab is freed, if there is a
 * pending move, kmem puts the slab on a per-cache dead list and defers freeing
 * slabs on that list until all pending callbacks are completed. That way,
 * clients can be certain that the object of a move callback is in one of the
 * states listed above, making it possible to distinguish known objects (in
 * state #4) using the two low order bits of any pointer member (with the
 * exception of 'char *' or 'short *' which may not be 4-byte aligned on some
 * platforms).
 *
 * The test works as long as the client always transitions objects from state
 * #4 (known, in use) to state #5 (about to be freed, invalid) by setting the
 * low order bit of the client-designated pointer member. Since kmem only
 * writes invalid memory patterns, such as 0xbaddcafe to uninitialized memory
 * and 0xdeadbeef to freed memory, any scribbling on the object done by kmem is
 * guaranteed to set at least one of the two low order bits. Therefore, given
 * an object with a back pointer to a 'container_t *o_container', the client
 * can test
 *
 *	container_t *container = object->o_container;
 *	if ((uintptr_t)container & 0x3) {
 *		return (KMEM_CBRC_DONT_KNOW);
 *	}
 *
 * Typically, an object will have a pointer to some structure with a list or
 * hash where objects from the cache are kept while in use. Assuming that the
 * client has some way of knowing that the container structure is valid and
 * will not go away during the move, and assuming that the structure includes a
 * lock to protect whatever collection is used, then the client would continue
 * as follows:
 *
 *	// Ensure that the container structure does not go away.
 *	if (container_hold(container) == 0) {
 *		return (KMEM_CBRC_DONT_KNOW);
 *	}
 *	mutex_enter(&container->c_objects_lock);
 *	if (container != object->o_container) {
 *		mutex_exit(&container->c_objects_lock);
 *		container_rele(container);
 *		return (KMEM_CBRC_DONT_KNOW);
 *	}
 *
 * At this point the client knows that the object cannot be freed as long as
 * c_objects_lock is held. Note that after acquiring the lock, the client must
 * recheck the o_container pointer in case the object was removed just before
 * acquiring the lock.
 *
 * When the client is about to free an object, it must first remove that object
 * from the list, hash, or other structure where it is kept. At that time, to
 * mark the object so it can be distinguished from the remaining, known
 * objects, the client sets the designated low order bit:
 *
 *	mutex_enter(&container->c_objects_lock);
 *	object->o_container = (void *)((uintptr_t)object->o_container | 0x1);
 *	list_remove(&container->c_objects, object);
 *	mutex_exit(&container->c_objects_lock);
 *
 * In the common case, the object is freed to the magazine layer, where it may
 * be reused on a subsequent allocation without the overhead of calling the
 * constructor. While in the magazine it appears allocated from the point of
 * view of the slab layer, making it a candidate for the move callback. Most
 * objects unrecognized by the client in the move callback fall into this
 * category and are cheaply distinguished from known objects by the test
 * described earlier. Since recognition is cheap for the client, and searching
 * magazines is expensive for kmem, kmem defers searching until the client
 * first returns KMEM_CBRC_DONT_KNOW. As long as the needed effort is
 * reasonable, kmem elsewhere does what it can to avoid bothering the client
 * unnecessarily.
 *
 * Invalidating the designated pointer member before freeing the object marks
 * the object to be avoided in the callback, and conversely, assigning a valid
 * value to the designated pointer member after allocating the object makes the
 * object fair game for the callback:
 *
 *	... allocate object ...
 *	... set any initial state not set by the constructor ...
 *
 *	mutex_enter(&container->c_objects_lock);
 *	list_insert_tail(&container->c_objects, object);
 *	membar_producer();
 *	object->o_container = container;
 *	mutex_exit(&container->c_objects_lock);
 *
 * Note that everything else must be valid before setting o_container makes the
 * object fair game for the move callback. The membar_producer() call ensures
 * that all the object's state is written to memory before setting the pointer
 * that transitions the object from state #3 or #7 (allocated, constructed, not
 * yet in use) to state #4 (in use, valid). That's important because the move
 * function has to check the validity of the pointer before it can safely
 * acquire the lock protecting the collection where it expects to find known
 * objects.
 *
 * This method of distinguishing known objects observes the usual symmetry:
 * invalidating the designated pointer is the first thing the client does
 * before freeing the object, and setting the designated pointer is the last
 * thing the client does after allocating the object. Of course, the client is
 * not required to use this method. Fundamentally, how the client recognizes
 * known objects is completely up to the client, but this method is recommended
 * as an efficient and safe way to take advantage of the guarantees made by
 * kmem. If the entire object is arbitrary data without any markable bits from
 * a suitable pointer member, then the client must find some other method, such
 * as searching a hash table of known objects.
 *
 * 2.5 Preventing Objects From Moving
 *
 * Besides a way to distinguish known objects, the other thing that the client
 * needs is a strategy to ensure that an object will not move while the client
 * is actively using it. The details of satisfying this requirement tend to be
 * highly cache-specific. It might seem that the same rules that let a client
 * remove an object safely should also decide when an object can be moved
 * safely. However, any object state that makes a removal attempt invalid is
 * likely to be long-lasting for objects that the client does not expect to
 * remove. kmem knows nothing about the object state and is equally likely
 * (from the client's point of view) to request a move for any object in the
 * cache, whether prepared for removal or not. Even a low percentage of objects
 * stuck in place by unremovability will defeat the consolidator if the stuck
 * objects are the same long-lived allocations likely to hold slabs hostage.
 * Fundamentally, the consolidator is not aimed at common cases. Severe
 * external fragmentation is a worst case scenario manifested as sparsely
 * allocated slabs, by definition a low percentage of the cache's objects.
 * When deciding what makes an object movable, keep in mind the goal of the
 * consolidator: to bring worst-case external fragmentation within the limits
 * guaranteed for internal fragmentation. Removability is a poor criterion if
 * it is likely to exclude more than an insignificant percentage of objects for
 * long periods of time.
 *
 * A tricky general solution exists, and it has the advantage of letting you
 * move any object at almost any moment, practically eliminating the likelihood
 * that an object can hold a slab hostage. However, if there is a
 * cache-specific way to ensure that an object is not actively in use in the
 * vast majority of cases, a simpler solution that leverages this
 * cache-specific knowledge is preferred.
 *
 * 2.5.1 Cache-Specific Solution
 *
 * As an example of a cache-specific solution, the ZFS znode cache takes
 * advantage of the fact that the vast majority of znodes are only being
 * referenced from the DNLC. (A typical case might be a few hundred in active
 * use and a hundred thousand in the DNLC.) In the move callback, after the ZFS
 * client has established that it recognizes the znode and can access its
 * fields safely (using the method described earlier), it then tests whether
 * the znode is referenced by anything other than the DNLC. If so, it assumes
 * that the znode may be in active use and is unsafe to move, so it drops its
 * locks and returns KMEM_CBRC_LATER. The advantage of this strategy is that
 * everywhere else znodes are used, no change is needed to protect against the
 * possibility of the znode moving. The disadvantage is that it remains
 * possible for an application to hold a znode slab hostage with an open file
 * descriptor. However, this case ought to be rare and the consolidator has a
 * way to deal with it: If the client responds KMEM_CBRC_LATER repeatedly for
 * the same object, kmem eventually stops believing it and treats the slab as
 * if the client had responded KMEM_CBRC_NO. Having marked the hostage slab,
 * kmem can then focus on getting it off of the partial slab list by allocating
 * rather than freeing all of its objects. (Either way of getting a slab off
 * the free list reduces fragmentation.)
 *
 * 2.5.2 General Solution
 *
 * The general solution, on the other hand, requires an explicit hold
 * everywhere the object is used to prevent it from moving. To keep the client
 * locking strategy as uncomplicated as possible, kmem guarantees the
 * simplifying assumption that move callbacks are sequential, even across
 * multiple caches. Internally, a global queue processed by a single thread
 * supports all caches implementing the callback function.
 * No matter how many caches supply a move function, the consolidator never
 * moves more than one object at a time, so the client does not have to worry
 * about tricky lock ordering involving several related objects from different
 * kmem caches.
 *
 * The general solution implements the explicit hold as a read-write lock,
 * which allows multiple readers to access an object from the cache
 * simultaneously while a single writer is excluded from moving it. A single
 * rwlock for the entire cache would lock out all threads from using any of the
 * cache's objects even though only a single object is being moved, so to
 * reduce contention, the client can fan out the single rwlock into an array of
 * rwlocks hashed by the object address, making it probable that moving one
 * object will not prevent other threads from using a different object. The
 * rwlock cannot be a member of the object itself, because the possibility of
 * the object moving makes it unsafe to access any of the object's fields until
 * the lock is acquired.
 *
 * Assuming a small, fixed number of locks, it's possible that multiple objects
 * will hash to the same lock. A thread that needs to use multiple objects in
 * the same function may acquire the same lock multiple times. Since rwlocks
 * are reentrant for readers, and since there is never more than a single
 * writer at a time (assuming that the client acquires the lock as a writer
 * only when moving an object inside the callback), there would seem to be no
 * problem. However, a client locking multiple objects in the same function
 * must handle one case of potential deadlock: Assume that thread A needs to
 * prevent both object 1 and object 2 from moving, and thread B, the callback,
 * meanwhile tries to move object 3. It's possible, if objects 1, 2, and 3 all
 * hash to the same lock, that thread A will acquire the lock for object 1 as a
 * reader before thread B sets the lock's write-wanted bit, preventing thread A
 * from reacquiring the lock for object 2 as a reader. Unable to make forward
 * progress, thread A will never release the lock for object 1, resulting in
 * deadlock.
 *
 * There are two ways of avoiding the deadlock just described. The first is to
 * use rw_tryenter() rather than rw_enter() in the callback function when
 * attempting to acquire the lock as a writer. If tryenter discovers that the
 * same object (or another object hashed to the same lock) is already in use,
 * it aborts the callback and returns KMEM_CBRC_LATER. The second way is to use
 * rprwlock_t (declared in common/fs/zfs/sys/rprwlock.h) instead of rwlock_t,
 * since it allows a thread to acquire the lock as a reader in spite of a
 * waiting writer.
 * This second approach insists on moving the object now, no matter how many
 * readers the move function must wait for in order to do so, and could delay
 * the completion of the callback indefinitely (blocking callbacks to other
 * clients). In practice, a less insistent callback using rw_tryenter() returns
 * KMEM_CBRC_LATER infrequently enough that there seems little reason to use
 * anything else.
 *
 * Avoiding deadlock is not the only problem that an implementation using an
 * explicit hold needs to solve. Locking the object in the first place (to
 * prevent it from moving) remains a problem, since the object could move
 * between the time you obtain a pointer to the object and the time you acquire
 * the rwlock hashed to that pointer value. Therefore the client needs to
 * recheck the value of the pointer after acquiring the lock, drop the lock if
 * the value has changed, and try again. This requires a level of indirection:
 * something that points to the object rather than the object itself, that the
 * client can access safely while attempting to acquire the lock. (The object
 * itself cannot be referenced safely because it can move at any time.)
 * The following lock-acquisition function takes whatever is safe to reference
 * (arg), follows its pointer to the object (using function f), and tries as
 * often as necessary to acquire the hashed lock and verify that the object
 * still has not moved:
 *
 *	object_t *
 *	object_hold(object_f f, void *arg)
 *	{
 *		object_t *op;
 *
 *		op = f(arg);
 *		if (op == NULL) {
 *			return (NULL);
 *		}
 *
 *		rw_enter(OBJECT_RWLOCK(op), RW_READER);
 *		while (op != f(arg)) {
 *			rw_exit(OBJECT_RWLOCK(op));
 *			op = f(arg);
 *			if (op == NULL) {
 *				break;
 *			}
 *			rw_enter(OBJECT_RWLOCK(op), RW_READER);
 *		}
 *
 *		return (op);
 *	}
 *
 * The OBJECT_RWLOCK macro hashes the object address to obtain the rwlock. The
 * lock reacquisition loop, while necessary, almost never executes.
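 *
 * One possible definition of OBJECT_RWLOCK is sketched below; the array size
 * is an arbitrary illustration, and the shift simply discards low order bits
 * that are identical for aligned objects:
 *
 *	#define	OBJECT_RWLOCK_COUNT	64	// power of 2
 *	static krwlock_t object_rwlock[OBJECT_RWLOCK_COUNT];
 *	#define	OBJECT_RWLOCK(op)	(&object_rwlock[ \
 *	    ((uintptr_t)(op) >> 3) & (OBJECT_RWLOCK_COUNT - 1)])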
 *
 * The function pointer f (used to obtain the object pointer from arg) has the
 * following type definition:
 *
 *	typedef object_t *(*object_f)(void *arg);
 *
 * An object_f implementation is likely to be as simple as accessing a
 * structure member:
 *
 *	object_t *
 *	s_object(void *arg)
 *	{
 *		something_t *sp = arg;
 *		return (sp->s_object);
 *	}
 *
 * The flexibility of a function pointer allows the path to the object to be
 * arbitrarily complex and also supports the notion that depending on where you
 * are using the object, you may need to get it from someplace different.
 *
 * The function that releases the explicit hold is simpler because it does not
 * have to worry about the object moving:
 *
 *	void
 *	object_rele(object_t *op)
 *	{
 *		rw_exit(OBJECT_RWLOCK(op));
 *	}
 *
 * The caller is spared these details so that obtaining and releasing an
 * explicit hold feels like a simple mutex_enter()/mutex_exit() pair. The
 * caller of object_hold() only needs to know that the returned object pointer
 * is valid if not NULL and that the object will not move until released.
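 *
 * For example, a caller might use the pair as follows (a sketch reusing
 * s_object() from above; as discussed next, the caller must separately ensure
 * that the object is not freed while held):
 *
 *	something_t *sp = ...;
 *	object_t *op;
 *
 *	if ((op = object_hold(s_object, sp)) != NULL) {
 *		... use the object, which cannot move while held ...
 *		object_rele(op);
 *	}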
 *
 * Although object_hold() prevents an object from moving, it does not prevent
 * it from being freed. The caller must take measures before calling
 * object_hold() (afterwards is too late) to ensure that the held object cannot
 * be freed. The caller must do so without accessing the unsafe object
 * reference, so any lock or reference count used to ensure the continued
 * existence of the object must live outside the object itself.
 *
 * Obtaining a new object is a special case where an explicit hold is
 * impossible for the caller. Any function that returns a newly allocated
 * object (either as a return value, or as an in-out parameter) must return it
 * already held; by the time the caller gets it, it is already too late, since
 * the object cannot be safely accessed without the level of indirection
 * described earlier. The following object_alloc() example uses the same code
 * shown earlier to transition a new object into the state of being recognized
 * (by the client) as a known object. The function must acquire the hold
 * (rw_enter) before that state transition makes the object movable:
 *
 *	static object_t *
 *	object_alloc(container_t *container)
 *	{
 *		object_t *object = kmem_cache_alloc(object_cache, 0);
 *		... set any initial state not set by the constructor ...
 *		rw_enter(OBJECT_RWLOCK(object), RW_READER);
 *		mutex_enter(&container->c_objects_lock);
 *		list_insert_tail(&container->c_objects, object);
 *		membar_producer();
 *		object->o_container = container;
 *		mutex_exit(&container->c_objects_lock);
 *		return (object);
 *	}
 *
 * Functions that implicitly acquire an object hold (any function that calls
 * object_alloc() to supply an object for the caller) need to be carefully
 * noted so that the matching object_rele() is not neglected. Otherwise, leaked
 * holds prevent all objects hashed to the affected rwlocks from ever being
 * moved.
 *
 * The pointer to a held object can be hashed to the holding rwlock even after
 * the object has been freed. Although it is possible to release the hold
 * after freeing the object, you may decide to release the hold implicitly in
 * whatever function frees the object, so as to release the hold as soon as
 * possible, and for the sake of symmetry with the function that implicitly
 * acquires the hold when it allocates the object. Here, object_free() releases
 * the hold acquired by object_alloc(). Its implicit object_rele() forms a
 * matching pair with object_hold():
 *
 *	void
 *	object_free(object_t *object)
 *	{
 *		container_t *container;
 *
 *		ASSERT(object_held(object));
 *		container = object->o_container;
 *		mutex_enter(&container->c_objects_lock);
 *		object->o_container =
 *		    (void *)((uintptr_t)object->o_container | 0x1);
 *		list_remove(&container->c_objects, object);
 *		mutex_exit(&container->c_objects_lock);
 *		object_rele(object);
 *		kmem_cache_free(object_cache, object);
 *	}
 *
 * Note that object_free() cannot safely accept an object pointer as an
 * argument unless the object is already held. Any function that calls
 * object_free() needs to be carefully noted since it similarly forms a
 * matching pair with object_hold().
 *
 * To complete the picture, the following callback function implements the
 * general solution by moving objects only if they are currently unheld:
 *
 *	static kmem_cbrc_t
 *	object_move(void *buf, void *newbuf, size_t size, void *arg)
 *	{
 *		object_t *op = buf, *np = newbuf;
 *		container_t *container;
 *
 *		container = op->o_container;
 *		if ((uintptr_t)container & 0x3) {
 *			return (KMEM_CBRC_DONT_KNOW);
 *		}
 *
 *		// Ensure that the container structure does not go away.
 *		if (container_hold(container) == 0) {
 *			return (KMEM_CBRC_DONT_KNOW);
 *		}
 *
 *		mutex_enter(&container->c_objects_lock);
 *		if (container != op->o_container) {
 *			mutex_exit(&container->c_objects_lock);
 *			container_rele(container);
 *			return (KMEM_CBRC_DONT_KNOW);
 *		}
 *
 *		if (rw_tryenter(OBJECT_RWLOCK(op), RW_WRITER) == 0) {
 *			mutex_exit(&container->c_objects_lock);
 *			container_rele(container);
 *			return (KMEM_CBRC_LATER);
 *		}
 *
 *		object_move_impl(op, np);	// critical section
 *		rw_exit(OBJECT_RWLOCK(op));
 *
 *		op->o_container = (void *)((uintptr_t)op->o_container | 0x1);
 *		list_link_replace(&op->o_link_node, &np->o_link_node);
 *		mutex_exit(&container->c_objects_lock);
 *		container_rele(container);
 *		return (KMEM_CBRC_YES);
 *	}
 *
 * Note that object_move() must invalidate the designated o_container pointer
 * of the old object in the same way that object_free() does, since kmem will
 * free the object in response to the KMEM_CBRC_YES return value.
 *
 * The lock order in object_move() differs from object_alloc(), which locks
 * OBJECT_RWLOCK first and &container->c_objects_lock second, but as long as
 * the callback uses rw_tryenter() (preventing the deadlock described earlier),
 * it's not a problem. Holding the lock on the object list in the example above
 * through the entire callback not only prevents the object from going away, it
 * also allows you to lock the list elsewhere and know that none of its
 * elements will move during iteration.
 *
 * Adding an explicit hold everywhere an object from the cache is used is
 * tricky and involves much more change to client code than a cache-specific
 * solution that leverages existing state to decide whether or not an object is
 * movable. However, this approach has the advantage that no object remains
 * immovable for any significant length of time, making it extremely unlikely
 * that long-lived allocations can continue holding slabs hostage; and it works
 * for any cache.
 *
 * 3. Consolidator Implementation
 *
 * Once the client supplies a move function that a) recognizes known objects
 * and b) avoids moving objects that are actively in use, the remaining work is
 * up to the consolidator to decide which objects to move and when to issue
 * callbacks.
 *
 * The consolidator relies on the fact that a cache's slabs are ordered by
 * usage. Each slab has a fixed number of objects.
 * Depending on the slab's
 * "color" (the offset of the first object from the beginning of the slab;
 * offsets are staggered to mitigate false sharing of cache lines) it is either
 * the maximum number of objects per slab determined at cache creation time or
 * else the number closest to the maximum that fits within the space remaining
 * after the initial offset. A completely allocated slab may contribute some
 * internal fragmentation (per-slab overhead) but no external fragmentation, so
 * it is of no interest to the consolidator. At the other extreme, slabs whose
 * objects have all been freed to the slab are released to the virtual memory
 * (VM) subsystem (objects freed to magazines are still allocated as far as the
 * slab is concerned). External fragmentation exists when there are slabs
 * somewhere between these extremes. A partial slab has at least one but not
 * all of its objects allocated. The more partial slabs, and the fewer
 * allocated objects on each of them, the higher the fragmentation. Hence the
 * consolidator's overall strategy is to reduce the number of partial slabs by
 * moving allocated objects from the least allocated slabs to the most
 * allocated slabs.
 *
 * Partial slabs are kept in an AVL tree ordered by usage. Completely allocated
 * slabs are kept separately in an unordered list. Since the majority of slabs
 * tend to be completely allocated (a typical unfragmented cache may have
 * thousands of complete slabs and only a single partial slab), separating
 * complete slabs improves the efficiency of partial slab ordering, since the
 * complete slabs do not affect the depth or balance of the AVL tree. This
 * ordered sequence of partial slabs acts as a "free list" supplying objects
 * for allocation requests.
 *
 * Objects are always allocated from the first partial slab in the free list,
 * where the allocation is most likely to eliminate a partial slab (by
 * completely allocating it). Conversely, when a single object from a
 * completely allocated slab is freed to the slab, that slab is added to the
 * front of the free list. Since most free list activity involves highly
 * allocated slabs coming and going at the front of the list, slabs tend
 * naturally toward the ideal order: highly allocated at the front, sparsely
 * allocated at the back. Slabs with few allocated objects are likely to become
 * completely free if they keep a safe distance away from the front of the free
 * list. Slab misorders interfere with the natural tendency of slabs to become
 * completely free or completely allocated. For example, a slab with a single
 * allocated object needs only a single free to escape the cache; its natural
 * desire is frustrated when it finds itself at the front of the list where a
 * second allocation happens just before the free could have released it.
 * Another slab with all but one object allocated might have supplied the
 * buffer instead, so that both (as opposed to neither) of the slabs would have
 * been taken off the free list.
 *
 * Although slabs tend naturally toward the ideal order, misorders allowed by a
 * simple list implementation defeat the consolidator's strategy of merging
 * least- and most-allocated slabs. Without an AVL tree to guarantee order,
 * kmem needs another way to fix misorders to optimize its callback strategy.
 * One approach is to periodically scan a limited number of slabs, advancing a
 * marker to hold the current scan position, and to move extreme misorders to
 * the front or back of the free list and to the front or back of the current
 * scan range. By making consecutive scan ranges overlap by one slab, the least
 * allocated slab in the current range can be carried along from the end of one
 * scan to the start of the next.
 *
 * Maintaining partial slabs in an AVL tree relieves kmem of this additional
 * task, however. Since most of the cache's activity is in the magazine layer,
 * and allocations from the slab layer represent only a startup cost, the
 * overhead of maintaining a balanced tree is not a significant concern
 * compared to the opportunity of reducing complexity by eliminating the
 * partial slab scanner just described. The overhead of an AVL tree is
 * minimized by maintaining only partial slabs in the tree and keeping
 * completely allocated slabs separately in a list. To avoid increasing the
 * size of the slab structure the AVL linkage pointers are reused for the
 * slab's list linkage, since the slab will always be either partial or
 * complete, never stored both ways at the same time. To further minimize the
 * overhead of the AVL tree the compare function that orders partial slabs by
 * usage divides the range of allocated object counts into bins such that
 * counts within the same bin are considered equal. Binning partial slabs makes
 * it less likely that allocating or freeing a single object will change the
 * slab's order, requiring a tree reinsertion (an avl_remove() followed by an
 * avl_add(), both potentially requiring some rebalancing of the tree).
 * Allocation counts closest to completely free and completely allocated are
 * left unbinned (finely sorted) to better support the consolidator's strategy
 * of merging slabs at either extreme.
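 *
 * To illustrate the binning idea, a compare function might look like the
 * following sketch (the bin shift is an arbitrary example, and the special
 * fine-grained handling of the extremes is omitted):
 *
 *	static int
 *	slab_usage_compar(const void *a1, const void *a2)
 *	{
 *		const kmem_slab_t *s1 = a1;
 *		const kmem_slab_t *s2 = a2;
 *		size_t bin1 = s1->slab_refcnt >> 4;	// bin object counts
 *		size_t bin2 = s2->slab_refcnt >> 4;
 *
 *		if (bin1 != bin2)
 *			return (bin1 > bin2 ? -1 : 1);	// more allocated first
 *		// same bin: break the tie by address to keep a total order
 *		if (s1 != s2)
 *			return ((uintptr_t)s1 < (uintptr_t)s2 ? -1 : 1);
 *		return (0);
 *	}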
 *
 * 3.1 Assessing Fragmentation and Selecting Candidate Slabs
 *
 * The consolidator piggybacks on the kmem maintenance thread and is called on
 * the same interval as kmem_cache_update(), once per cache every fifteen
 * seconds. kmem maintains a running count of unallocated objects in the slab
 * layer (cache_bufslab). The consolidator checks whether that number exceeds
 * 12.5% (1/8) of the total objects in the cache (cache_buftotal), and whether
 * there is a significant number of slabs in the cache (arbitrarily a minimum
 * 101 total slabs). Unused objects that have fallen out of the magazine
 * layer's working set are included in the assessment, and magazines in the
 * depot are reaped if those objects would lift cache_bufslab above the
 * fragmentation threshold. Once the consolidator decides that a cache is
 * fragmented, it looks for a candidate slab to reclaim, starting at the end of
 * the partial slab free list and scanning backwards. At first the consolidator
 * is choosy: only a slab with fewer than 12.5% (1/8) of its objects allocated
 * qualifies (or else a single allocated object, regardless of percentage). If
 * there is difficulty finding a candidate slab, kmem raises the allocation
 * threshold incrementally, up to a maximum 87.5% (7/8), so that eventually the
 * consolidator will reduce external fragmentation (unused objects on the free
 * list) below 12.5% (1/8), even in the worst case of every slab in the cache
 * being almost 7/8 allocated. The threshold can also be lowered incrementally
 * when candidate slabs are easy to find, and the threshold is reset to the
 * minimum 1/8 as soon as the cache is no longer fragmented.
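 *
 * In other words, the initial fragmentation test amounts to something like the
 * following sketch (the helper and its parameters are illustrative only, and
 * the depot-reaping refinement described above is omitted):
 *
 *	static boolean_t
 *	cache_is_fragmented(uint64_t bufslab, uint64_t buftotal, size_t nslabs)
 *	{
 *		// more than 1/8 of the objects unused, in at least 101 slabs
 *		return (nslabs > 100 && bufslab > (buftotal >> 3));
 *	}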
764b5fca8f8Stomee * The map of pending callbacks is protected by the same lock that protects the 765b5fca8f8Stomee * slab layer. 766b5fca8f8Stomee * 767b5fca8f8Stomee * When the system is desperate for memory, kmem does not bother to determine 768b5fca8f8Stomee * whether or not the cache exceeds the fragmentation threshold, but tries to 769b5fca8f8Stomee * consolidate as many slabs as possible. Normally, the consolidator chews 770b5fca8f8Stomee * slowly, one sparsely allocated slab at a time during each maintenance 771b5fca8f8Stomee * interval that the cache is fragmented. When desperate, the consolidator 772b5fca8f8Stomee * starts at the last partial slab and enqueues callbacks for every allocated 773b5fca8f8Stomee * object on every partial slab, working backwards until it reaches the first 774b5fca8f8Stomee * partial slab. The first partial slab, meanwhile, advances in pace with the 775b5fca8f8Stomee * consolidator as allocations to supply move destinations for the enqueued 776b5fca8f8Stomee * callbacks use up the highly allocated slabs at the front of the free list. 777b5fca8f8Stomee * Ideally, the overgrown free list collapses like an accordion, starting at 778b5fca8f8Stomee * both ends and ending at the center with a single partial slab. 779b5fca8f8Stomee * 780b5fca8f8Stomee * 3.3 Client Responses 781b5fca8f8Stomee * 782b5fca8f8Stomee * When the client returns KMEM_CBRC_NO in response to the move callback, kmem 783b5fca8f8Stomee * marks the slab that supplied the stuck object non-reclaimable and moves it to 784b5fca8f8Stomee * front of the free list. The slab remains marked as long as it remains on the 785b5fca8f8Stomee * free list, and it appears more allocated to the partial slab compare function 786b5fca8f8Stomee * than any unmarked slab, no matter how many of its objects are allocated. 787b5fca8f8Stomee * Since even one immovable object ties up the entire slab, the goal is to 788b5fca8f8Stomee * completely allocate any slab that cannot be completely freed. kmem does not 789b5fca8f8Stomee * bother generating callbacks to move objects from a marked slab unless the 790b5fca8f8Stomee * system is desperate. 791b5fca8f8Stomee * 792b5fca8f8Stomee * When the client responds KMEM_CBRC_LATER, kmem increments a count for the 793b5fca8f8Stomee * slab. If the client responds LATER too many times, kmem disbelieves and 794b5fca8f8Stomee * treats the response as a NO. The count is cleared when the slab is taken off 795b5fca8f8Stomee * the partial slab list or when the client moves one of the slab's objects. 796b5fca8f8Stomee * 797b5fca8f8Stomee * 4. Observability 798b5fca8f8Stomee * 799b5fca8f8Stomee * A kmem cache's external fragmentation is best observed with 'mdb -k' using 800b5fca8f8Stomee * the ::kmem_slabs dcmd. For a complete description of the command, enter 801b5fca8f8Stomee * '::help kmem_slabs' at the mdb prompt. 
8027c478bd9Sstevel@tonic-gate */ 8037c478bd9Sstevel@tonic-gate 8047c478bd9Sstevel@tonic-gate #include <sys/kmem_impl.h> 8057c478bd9Sstevel@tonic-gate #include <sys/vmem_impl.h> 8067c478bd9Sstevel@tonic-gate #include <sys/param.h> 8077c478bd9Sstevel@tonic-gate #include <sys/sysmacros.h> 8087c478bd9Sstevel@tonic-gate #include <sys/vm.h> 8097c478bd9Sstevel@tonic-gate #include <sys/proc.h> 8107c478bd9Sstevel@tonic-gate #include <sys/tuneable.h> 8117c478bd9Sstevel@tonic-gate #include <sys/systm.h> 8127c478bd9Sstevel@tonic-gate #include <sys/cmn_err.h> 8137c478bd9Sstevel@tonic-gate #include <sys/debug.h> 814b5fca8f8Stomee #include <sys/sdt.h> 8157c478bd9Sstevel@tonic-gate #include <sys/mutex.h> 8167c478bd9Sstevel@tonic-gate #include <sys/bitmap.h> 8177c478bd9Sstevel@tonic-gate #include <sys/atomic.h> 8187c478bd9Sstevel@tonic-gate #include <sys/kobj.h> 8197c478bd9Sstevel@tonic-gate #include <sys/disp.h> 8207c478bd9Sstevel@tonic-gate #include <vm/seg_kmem.h> 8217c478bd9Sstevel@tonic-gate #include <sys/log.h> 8227c478bd9Sstevel@tonic-gate #include <sys/callb.h> 8237c478bd9Sstevel@tonic-gate #include <sys/taskq.h> 8247c478bd9Sstevel@tonic-gate #include <sys/modctl.h> 8257c478bd9Sstevel@tonic-gate #include <sys/reboot.h> 8267c478bd9Sstevel@tonic-gate #include <sys/id32.h> 8277c478bd9Sstevel@tonic-gate #include <sys/zone.h> 828f4b3ec61Sdh #include <sys/netstack.h> 829b5fca8f8Stomee #ifdef DEBUG 830b5fca8f8Stomee #include <sys/random.h> 831b5fca8f8Stomee #endif 8327c478bd9Sstevel@tonic-gate 8337c478bd9Sstevel@tonic-gate extern void streams_msg_init(void); 8347c478bd9Sstevel@tonic-gate extern int segkp_fromheap; 8357c478bd9Sstevel@tonic-gate extern void segkp_cache_free(void); 8367c478bd9Sstevel@tonic-gate 8377c478bd9Sstevel@tonic-gate struct kmem_cache_kstat { 8387c478bd9Sstevel@tonic-gate kstat_named_t kmc_buf_size; 8397c478bd9Sstevel@tonic-gate kstat_named_t kmc_align; 8407c478bd9Sstevel@tonic-gate kstat_named_t kmc_chunk_size; 8417c478bd9Sstevel@tonic-gate kstat_named_t kmc_slab_size; 8427c478bd9Sstevel@tonic-gate kstat_named_t kmc_alloc; 8437c478bd9Sstevel@tonic-gate kstat_named_t kmc_alloc_fail; 8447c478bd9Sstevel@tonic-gate kstat_named_t kmc_free; 8457c478bd9Sstevel@tonic-gate kstat_named_t kmc_depot_alloc; 8467c478bd9Sstevel@tonic-gate kstat_named_t kmc_depot_free; 8477c478bd9Sstevel@tonic-gate kstat_named_t kmc_depot_contention; 8487c478bd9Sstevel@tonic-gate kstat_named_t kmc_slab_alloc; 8497c478bd9Sstevel@tonic-gate kstat_named_t kmc_slab_free; 8507c478bd9Sstevel@tonic-gate kstat_named_t kmc_buf_constructed; 8517c478bd9Sstevel@tonic-gate kstat_named_t kmc_buf_avail; 8527c478bd9Sstevel@tonic-gate kstat_named_t kmc_buf_inuse; 8537c478bd9Sstevel@tonic-gate kstat_named_t kmc_buf_total; 8547c478bd9Sstevel@tonic-gate kstat_named_t kmc_buf_max; 8557c478bd9Sstevel@tonic-gate kstat_named_t kmc_slab_create; 8567c478bd9Sstevel@tonic-gate kstat_named_t kmc_slab_destroy; 8577c478bd9Sstevel@tonic-gate kstat_named_t kmc_vmem_source; 8587c478bd9Sstevel@tonic-gate kstat_named_t kmc_hash_size; 8597c478bd9Sstevel@tonic-gate kstat_named_t kmc_hash_lookup_depth; 8607c478bd9Sstevel@tonic-gate kstat_named_t kmc_hash_rescale; 8617c478bd9Sstevel@tonic-gate kstat_named_t kmc_full_magazines; 8627c478bd9Sstevel@tonic-gate kstat_named_t kmc_empty_magazines; 8637c478bd9Sstevel@tonic-gate kstat_named_t kmc_magazine_size; 864*686031edSTom Erickson kstat_named_t kmc_reap; /* number of kmem_cache_reap() calls */ 865*686031edSTom Erickson kstat_named_t kmc_defrag; /* attempts to defrag all partial slabs */ 866*686031edSTom 
Erickson kstat_named_t kmc_scan; /* attempts to defrag one partial slab */ 867*686031edSTom Erickson kstat_named_t kmc_move_callbacks; /* sum of yes, no, later, dn, dk */ 868b5fca8f8Stomee kstat_named_t kmc_move_yes; 869b5fca8f8Stomee kstat_named_t kmc_move_no; 870b5fca8f8Stomee kstat_named_t kmc_move_later; 871b5fca8f8Stomee kstat_named_t kmc_move_dont_need; 872*686031edSTom Erickson kstat_named_t kmc_move_dont_know; /* obj unrecognized by client ... */ 873*686031edSTom Erickson kstat_named_t kmc_move_hunt_found; /* ... but found in mag layer */ 874*686031edSTom Erickson kstat_named_t kmc_move_slabs_freed; /* slabs freed by consolidator */ 875*686031edSTom Erickson kstat_named_t kmc_move_reclaimable; /* buffers, if consolidator ran */ 8767c478bd9Sstevel@tonic-gate } kmem_cache_kstat = { 8777c478bd9Sstevel@tonic-gate { "buf_size", KSTAT_DATA_UINT64 }, 8787c478bd9Sstevel@tonic-gate { "align", KSTAT_DATA_UINT64 }, 8797c478bd9Sstevel@tonic-gate { "chunk_size", KSTAT_DATA_UINT64 }, 8807c478bd9Sstevel@tonic-gate { "slab_size", KSTAT_DATA_UINT64 }, 8817c478bd9Sstevel@tonic-gate { "alloc", KSTAT_DATA_UINT64 }, 8827c478bd9Sstevel@tonic-gate { "alloc_fail", KSTAT_DATA_UINT64 }, 8837c478bd9Sstevel@tonic-gate { "free", KSTAT_DATA_UINT64 }, 8847c478bd9Sstevel@tonic-gate { "depot_alloc", KSTAT_DATA_UINT64 }, 8857c478bd9Sstevel@tonic-gate { "depot_free", KSTAT_DATA_UINT64 }, 8867c478bd9Sstevel@tonic-gate { "depot_contention", KSTAT_DATA_UINT64 }, 8877c478bd9Sstevel@tonic-gate { "slab_alloc", KSTAT_DATA_UINT64 }, 8887c478bd9Sstevel@tonic-gate { "slab_free", KSTAT_DATA_UINT64 }, 8897c478bd9Sstevel@tonic-gate { "buf_constructed", KSTAT_DATA_UINT64 }, 8907c478bd9Sstevel@tonic-gate { "buf_avail", KSTAT_DATA_UINT64 }, 8917c478bd9Sstevel@tonic-gate { "buf_inuse", KSTAT_DATA_UINT64 }, 8927c478bd9Sstevel@tonic-gate { "buf_total", KSTAT_DATA_UINT64 }, 8937c478bd9Sstevel@tonic-gate { "buf_max", KSTAT_DATA_UINT64 }, 8947c478bd9Sstevel@tonic-gate { "slab_create", KSTAT_DATA_UINT64 }, 8957c478bd9Sstevel@tonic-gate { "slab_destroy", KSTAT_DATA_UINT64 }, 8967c478bd9Sstevel@tonic-gate { "vmem_source", KSTAT_DATA_UINT64 }, 8977c478bd9Sstevel@tonic-gate { "hash_size", KSTAT_DATA_UINT64 }, 8987c478bd9Sstevel@tonic-gate { "hash_lookup_depth", KSTAT_DATA_UINT64 }, 8997c478bd9Sstevel@tonic-gate { "hash_rescale", KSTAT_DATA_UINT64 }, 9007c478bd9Sstevel@tonic-gate { "full_magazines", KSTAT_DATA_UINT64 }, 9017c478bd9Sstevel@tonic-gate { "empty_magazines", KSTAT_DATA_UINT64 }, 9027c478bd9Sstevel@tonic-gate { "magazine_size", KSTAT_DATA_UINT64 }, 903*686031edSTom Erickson { "reap", KSTAT_DATA_UINT64 }, 904*686031edSTom Erickson { "defrag", KSTAT_DATA_UINT64 }, 905*686031edSTom Erickson { "scan", KSTAT_DATA_UINT64 }, 906b5fca8f8Stomee { "move_callbacks", KSTAT_DATA_UINT64 }, 907b5fca8f8Stomee { "move_yes", KSTAT_DATA_UINT64 }, 908b5fca8f8Stomee { "move_no", KSTAT_DATA_UINT64 }, 909b5fca8f8Stomee { "move_later", KSTAT_DATA_UINT64 }, 910b5fca8f8Stomee { "move_dont_need", KSTAT_DATA_UINT64 }, 911b5fca8f8Stomee { "move_dont_know", KSTAT_DATA_UINT64 }, 912b5fca8f8Stomee { "move_hunt_found", KSTAT_DATA_UINT64 }, 913*686031edSTom Erickson { "move_slabs_freed", KSTAT_DATA_UINT64 }, 914*686031edSTom Erickson { "move_reclaimable", KSTAT_DATA_UINT64 }, 9157c478bd9Sstevel@tonic-gate }; 9167c478bd9Sstevel@tonic-gate 9177c478bd9Sstevel@tonic-gate static kmutex_t kmem_cache_kstat_lock; 9187c478bd9Sstevel@tonic-gate 9197c478bd9Sstevel@tonic-gate /* 9207c478bd9Sstevel@tonic-gate * The default set of caches to back kmem_alloc(). 
9217c478bd9Sstevel@tonic-gate * These sizes should be reevaluated periodically. 9227c478bd9Sstevel@tonic-gate * 9237c478bd9Sstevel@tonic-gate * We want allocations that are multiples of the coherency granularity 9247c478bd9Sstevel@tonic-gate * (64 bytes) to be satisfied from a cache which is a multiple of 64 9257c478bd9Sstevel@tonic-gate * bytes, so that it will be 64-byte aligned. For all multiples of 64, 9267c478bd9Sstevel@tonic-gate * the next kmem_cache_size greater than or equal to it must be a 9277c478bd9Sstevel@tonic-gate * multiple of 64. 928dce01e3fSJonathan W Adams * 929dce01e3fSJonathan W Adams * We split the table into two sections: size <= 4k and size > 4k. This 930dce01e3fSJonathan W Adams * saves a lot of space and cache footprint in our cache tables. 9317c478bd9Sstevel@tonic-gate */ 9327c478bd9Sstevel@tonic-gate static const int kmem_alloc_sizes[] = { 9337c478bd9Sstevel@tonic-gate 1 * 8, 9347c478bd9Sstevel@tonic-gate 2 * 8, 9357c478bd9Sstevel@tonic-gate 3 * 8, 9367c478bd9Sstevel@tonic-gate 4 * 8, 5 * 8, 6 * 8, 7 * 8, 9377c478bd9Sstevel@tonic-gate 4 * 16, 5 * 16, 6 * 16, 7 * 16, 9387c478bd9Sstevel@tonic-gate 4 * 32, 5 * 32, 6 * 32, 7 * 32, 9397c478bd9Sstevel@tonic-gate 4 * 64, 5 * 64, 6 * 64, 7 * 64, 9407c478bd9Sstevel@tonic-gate 4 * 128, 5 * 128, 6 * 128, 7 * 128, 9417c478bd9Sstevel@tonic-gate P2ALIGN(8192 / 7, 64), 9427c478bd9Sstevel@tonic-gate P2ALIGN(8192 / 6, 64), 9437c478bd9Sstevel@tonic-gate P2ALIGN(8192 / 5, 64), 9447c478bd9Sstevel@tonic-gate P2ALIGN(8192 / 4, 64), 9457c478bd9Sstevel@tonic-gate P2ALIGN(8192 / 3, 64), 9467c478bd9Sstevel@tonic-gate P2ALIGN(8192 / 2, 64), 9477c478bd9Sstevel@tonic-gate }; 9487c478bd9Sstevel@tonic-gate 949dce01e3fSJonathan W Adams static const int kmem_big_alloc_sizes[] = { 950dce01e3fSJonathan W Adams 2 * 4096, 3 * 4096, 951dce01e3fSJonathan W Adams 2 * 8192, 3 * 8192, 952dce01e3fSJonathan W Adams 4 * 8192, 5 * 8192, 6 * 8192, 7 * 8192, 953dce01e3fSJonathan W Adams 8 * 8192, 9 * 8192, 10 * 8192, 11 * 8192, 954dce01e3fSJonathan W Adams 12 * 8192, 13 * 8192, 14 * 8192, 15 * 8192, 955dce01e3fSJonathan W Adams 16 * 8192 956dce01e3fSJonathan W Adams }; 957dce01e3fSJonathan W Adams 958dce01e3fSJonathan W Adams #define KMEM_MAXBUF 4096 959dce01e3fSJonathan W Adams #define KMEM_BIG_MAXBUF_32BIT 32768 960dce01e3fSJonathan W Adams #define KMEM_BIG_MAXBUF 131072 961dce01e3fSJonathan W Adams 962dce01e3fSJonathan W Adams #define KMEM_BIG_MULTIPLE 4096 /* big_alloc_sizes must be a multiple */ 963dce01e3fSJonathan W Adams #define KMEM_BIG_SHIFT 12 /* lg(KMEM_BIG_MULTIPLE) */ 9647c478bd9Sstevel@tonic-gate 9657c478bd9Sstevel@tonic-gate static kmem_cache_t *kmem_alloc_table[KMEM_MAXBUF >> KMEM_ALIGN_SHIFT]; 966dce01e3fSJonathan W Adams static kmem_cache_t *kmem_big_alloc_table[KMEM_BIG_MAXBUF >> KMEM_BIG_SHIFT]; 967dce01e3fSJonathan W Adams 968dce01e3fSJonathan W Adams #define KMEM_ALLOC_TABLE_MAX (KMEM_MAXBUF >> KMEM_ALIGN_SHIFT) 969dce01e3fSJonathan W Adams static size_t kmem_big_alloc_table_max = 0; /* # of filled elements */ 9707c478bd9Sstevel@tonic-gate 9717c478bd9Sstevel@tonic-gate static kmem_magtype_t kmem_magtype[] = { 9727c478bd9Sstevel@tonic-gate { 1, 8, 3200, 65536 }, 9737c478bd9Sstevel@tonic-gate { 3, 16, 256, 32768 }, 9747c478bd9Sstevel@tonic-gate { 7, 32, 64, 16384 }, 9757c478bd9Sstevel@tonic-gate { 15, 64, 0, 8192 }, 9767c478bd9Sstevel@tonic-gate { 31, 64, 0, 4096 }, 9777c478bd9Sstevel@tonic-gate { 47, 64, 0, 2048 }, 9787c478bd9Sstevel@tonic-gate { 63, 64, 0, 1024 }, 9797c478bd9Sstevel@tonic-gate { 95, 64, 0, 512 }, 
9807c478bd9Sstevel@tonic-gate { 143, 64, 0, 0 }, 9817c478bd9Sstevel@tonic-gate }; 9827c478bd9Sstevel@tonic-gate 9837c478bd9Sstevel@tonic-gate static uint32_t kmem_reaping; 9847c478bd9Sstevel@tonic-gate static uint32_t kmem_reaping_idspace; 9857c478bd9Sstevel@tonic-gate 9867c478bd9Sstevel@tonic-gate /* 9877c478bd9Sstevel@tonic-gate * kmem tunables 9887c478bd9Sstevel@tonic-gate */ 9897c478bd9Sstevel@tonic-gate clock_t kmem_reap_interval; /* cache reaping rate [15 * HZ ticks] */ 9907c478bd9Sstevel@tonic-gate int kmem_depot_contention = 3; /* max failed tryenters per real interval */ 9917c478bd9Sstevel@tonic-gate pgcnt_t kmem_reapahead = 0; /* start reaping N pages before pageout */ 9927c478bd9Sstevel@tonic-gate int kmem_panic = 1; /* whether to panic on error */ 9937c478bd9Sstevel@tonic-gate int kmem_logging = 1; /* kmem_log_enter() override */ 9947c478bd9Sstevel@tonic-gate uint32_t kmem_mtbf = 0; /* mean time between failures [default: off] */ 9957c478bd9Sstevel@tonic-gate size_t kmem_transaction_log_size; /* transaction log size [2% of memory] */ 9967c478bd9Sstevel@tonic-gate size_t kmem_content_log_size; /* content log size [2% of memory] */ 9977c478bd9Sstevel@tonic-gate size_t kmem_failure_log_size; /* failure log [4 pages per CPU] */ 9987c478bd9Sstevel@tonic-gate size_t kmem_slab_log_size; /* slab create log [4 pages per CPU] */ 9997c478bd9Sstevel@tonic-gate size_t kmem_content_maxsave = 256; /* KMF_CONTENTS max bytes to log */ 10007c478bd9Sstevel@tonic-gate size_t kmem_lite_minsize = 0; /* minimum buffer size for KMF_LITE */ 10017c478bd9Sstevel@tonic-gate size_t kmem_lite_maxalign = 1024; /* maximum buffer alignment for KMF_LITE */ 10027c478bd9Sstevel@tonic-gate int kmem_lite_pcs = 4; /* number of PCs to store in KMF_LITE mode */ 10037c478bd9Sstevel@tonic-gate size_t kmem_maxverify; /* maximum bytes to inspect in debug routines */ 10047c478bd9Sstevel@tonic-gate size_t kmem_minfirewall; /* hardware-enforced redzone threshold */ 10057c478bd9Sstevel@tonic-gate 1006dce01e3fSJonathan W Adams #ifdef _LP64 1007dce01e3fSJonathan W Adams size_t kmem_max_cached = KMEM_BIG_MAXBUF; /* maximum kmem_alloc cache */ 1008dce01e3fSJonathan W Adams #else 1009dce01e3fSJonathan W Adams size_t kmem_max_cached = KMEM_BIG_MAXBUF_32BIT; /* maximum kmem_alloc cache */ 1010dce01e3fSJonathan W Adams #endif 1011dce01e3fSJonathan W Adams 10127c478bd9Sstevel@tonic-gate #ifdef DEBUG 10137c478bd9Sstevel@tonic-gate int kmem_flags = KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE | KMF_CONTENTS; 10147c478bd9Sstevel@tonic-gate #else 10157c478bd9Sstevel@tonic-gate int kmem_flags = 0; 10167c478bd9Sstevel@tonic-gate #endif 10177c478bd9Sstevel@tonic-gate int kmem_ready; 10187c478bd9Sstevel@tonic-gate 10197c478bd9Sstevel@tonic-gate static kmem_cache_t *kmem_slab_cache; 10207c478bd9Sstevel@tonic-gate static kmem_cache_t *kmem_bufctl_cache; 10217c478bd9Sstevel@tonic-gate static kmem_cache_t *kmem_bufctl_audit_cache; 10227c478bd9Sstevel@tonic-gate 10237c478bd9Sstevel@tonic-gate static kmutex_t kmem_cache_lock; /* inter-cache linkage only */ 1024b5fca8f8Stomee static list_t kmem_caches; 10257c478bd9Sstevel@tonic-gate 10267c478bd9Sstevel@tonic-gate static taskq_t *kmem_taskq; 10277c478bd9Sstevel@tonic-gate static kmutex_t kmem_flags_lock; 10287c478bd9Sstevel@tonic-gate static vmem_t *kmem_metadata_arena; 10297c478bd9Sstevel@tonic-gate static vmem_t *kmem_msb_arena; /* arena for metadata caches */ 10307c478bd9Sstevel@tonic-gate static vmem_t *kmem_cache_arena; 10317c478bd9Sstevel@tonic-gate static vmem_t *kmem_hash_arena; 
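/*
 * For illustration, the size-class tables and defines above translate an
 * allocation size into a backing cache with shift-based lookups. This is a
 * sketch, not the literal kmem_alloc() code path (which appears later in
 * this file); requests too large for both tables fall through to the
 * oversize arena declared below:
 *
 *	kmem_cache_t *cp = NULL;
 *
 *	if (size <= KMEM_MAXBUF) {
 *		cp = kmem_alloc_table[(size - 1) >> KMEM_ALIGN_SHIFT];
 *	} else if (size <= kmem_max_cached) {
 *		size_t i = (size - 1) >> KMEM_BIG_SHIFT;
 *		if (i < kmem_big_alloc_table_max)
 *			cp = kmem_big_alloc_table[i];
 *	}
 *	if (cp == NULL)
 *		buf = vmem_alloc(kmem_oversize_arena, size,
 *		    kmflag & KM_VMFLAGS);
 */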
10327c478bd9Sstevel@tonic-gate static vmem_t *kmem_log_arena; 10337c478bd9Sstevel@tonic-gate static vmem_t *kmem_oversize_arena; 10347c478bd9Sstevel@tonic-gate static vmem_t *kmem_va_arena; 10357c478bd9Sstevel@tonic-gate static vmem_t *kmem_default_arena; 10367c478bd9Sstevel@tonic-gate static vmem_t *kmem_firewall_va_arena; 10377c478bd9Sstevel@tonic-gate static vmem_t *kmem_firewall_arena; 10387c478bd9Sstevel@tonic-gate 1039b5fca8f8Stomee /* 1040b5fca8f8Stomee * Define KMEM_STATS to turn on statistic gathering. By default, it is only 1041b5fca8f8Stomee * turned on when DEBUG is also defined. 1042b5fca8f8Stomee */ 1043b5fca8f8Stomee #ifdef DEBUG 1044b5fca8f8Stomee #define KMEM_STATS 1045b5fca8f8Stomee #endif /* DEBUG */ 1046b5fca8f8Stomee 1047b5fca8f8Stomee #ifdef KMEM_STATS 1048b5fca8f8Stomee #define KMEM_STAT_ADD(stat) ((stat)++) 1049b5fca8f8Stomee #define KMEM_STAT_COND_ADD(cond, stat) ((void) (!(cond) || (stat)++)) 1050b5fca8f8Stomee #else 1051b5fca8f8Stomee #define KMEM_STAT_ADD(stat) /* nothing */ 1052b5fca8f8Stomee #define KMEM_STAT_COND_ADD(cond, stat) /* nothing */ 1053b5fca8f8Stomee #endif /* KMEM_STATS */ 1054b5fca8f8Stomee 1055b5fca8f8Stomee /* 1056b5fca8f8Stomee * kmem slab consolidator thresholds (tunables) 1057b5fca8f8Stomee */ 1058*686031edSTom Erickson size_t kmem_frag_minslabs = 101; /* minimum total slabs */ 1059*686031edSTom Erickson size_t kmem_frag_numer = 1; /* free buffers (numerator) */ 1060*686031edSTom Erickson size_t kmem_frag_denom = KMEM_VOID_FRACTION; /* buffers (denominator) */ 1061b5fca8f8Stomee /* 1062b5fca8f8Stomee * Maximum number of slabs from which to move buffers during a single 1063b5fca8f8Stomee * maintenance interval while the system is not low on memory. 1064b5fca8f8Stomee */ 1065*686031edSTom Erickson size_t kmem_reclaim_max_slabs = 1; 1066b5fca8f8Stomee /* 1067b5fca8f8Stomee * Number of slabs to scan backwards from the end of the partial slab list 1068b5fca8f8Stomee * when searching for buffers to relocate. 
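 *
 * Like the other consolidator thresholds above, these are plain kernel
 * variables, so they can be tuned from /etc/system in the usual way, e.g.
 * (illustrative values, not recommendations):
 *
 *	set kmem_reclaim_max_slabs=4
 *	set kmem_reclaim_scan_range=24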
1069b5fca8f8Stomee */ 1070*686031edSTom Erickson size_t kmem_reclaim_scan_range = 12; 1071b5fca8f8Stomee 1072b5fca8f8Stomee #ifdef KMEM_STATS 1073b5fca8f8Stomee static struct { 1074b5fca8f8Stomee uint64_t kms_callbacks; 1075b5fca8f8Stomee uint64_t kms_yes; 1076b5fca8f8Stomee uint64_t kms_no; 1077b5fca8f8Stomee uint64_t kms_later; 1078b5fca8f8Stomee uint64_t kms_dont_need; 1079b5fca8f8Stomee uint64_t kms_dont_know; 1080b5fca8f8Stomee uint64_t kms_hunt_found_mag; 1081*686031edSTom Erickson uint64_t kms_hunt_found_slab; 1082b5fca8f8Stomee uint64_t kms_hunt_alloc_fail; 1083b5fca8f8Stomee uint64_t kms_hunt_lucky; 1084b5fca8f8Stomee uint64_t kms_notify; 1085b5fca8f8Stomee uint64_t kms_notify_callbacks; 1086b5fca8f8Stomee uint64_t kms_disbelief; 1087b5fca8f8Stomee uint64_t kms_already_pending; 1088b5fca8f8Stomee uint64_t kms_callback_alloc_fail; 108925e2c9cfStomee uint64_t kms_callback_taskq_fail; 1090*686031edSTom Erickson uint64_t kms_endscan_slab_dead; 1091b5fca8f8Stomee uint64_t kms_endscan_slab_destroyed; 1092b5fca8f8Stomee uint64_t kms_endscan_nomem; 1093b5fca8f8Stomee uint64_t kms_endscan_refcnt_changed; 1094b5fca8f8Stomee uint64_t kms_endscan_nomove_changed; 1095b5fca8f8Stomee uint64_t kms_endscan_freelist; 1096b5fca8f8Stomee uint64_t kms_avl_update; 1097b5fca8f8Stomee uint64_t kms_avl_noupdate; 1098b5fca8f8Stomee uint64_t kms_no_longer_reclaimable; 1099b5fca8f8Stomee uint64_t kms_notify_no_longer_reclaimable; 1100*686031edSTom Erickson uint64_t kms_notify_slab_dead; 1101*686031edSTom Erickson uint64_t kms_notify_slab_destroyed; 1102b5fca8f8Stomee uint64_t kms_alloc_fail; 1103b5fca8f8Stomee uint64_t kms_constructor_fail; 1104b5fca8f8Stomee uint64_t kms_dead_slabs_freed; 1105b5fca8f8Stomee uint64_t kms_defrags; 1106*686031edSTom Erickson uint64_t kms_scans; 1107b5fca8f8Stomee uint64_t kms_scan_depot_ws_reaps; 1108b5fca8f8Stomee uint64_t kms_debug_reaps; 1109*686031edSTom Erickson uint64_t kms_debug_scans; 1110b5fca8f8Stomee } kmem_move_stats; 1111b5fca8f8Stomee #endif /* KMEM_STATS */ 1112b5fca8f8Stomee 1113b5fca8f8Stomee /* consolidator knobs */ 1114b5fca8f8Stomee static boolean_t kmem_move_noreap; 1115b5fca8f8Stomee static boolean_t kmem_move_blocked; 1116b5fca8f8Stomee static boolean_t kmem_move_fulltilt; 1117b5fca8f8Stomee static boolean_t kmem_move_any_partial; 1118b5fca8f8Stomee 1119b5fca8f8Stomee #ifdef DEBUG 1120b5fca8f8Stomee /* 1121*686031edSTom Erickson * kmem consolidator debug tunables: 1122b5fca8f8Stomee * Ensure code coverage by occasionally running the consolidator even when the 1123b5fca8f8Stomee * caches are not fragmented (they may never be). These intervals are mean time 1124b5fca8f8Stomee * in cache maintenance intervals (kmem_cache_update). 
1125b5fca8f8Stomee */ 1126*686031edSTom Erickson uint32_t kmem_mtb_move = 60; /* defrag 1 slab (~15min) */ 1127*686031edSTom Erickson uint32_t kmem_mtb_reap = 1800; /* defrag all slabs (~7.5hrs) */ 1128b5fca8f8Stomee #endif /* DEBUG */ 1129b5fca8f8Stomee 1130b5fca8f8Stomee static kmem_cache_t *kmem_defrag_cache; 1131b5fca8f8Stomee static kmem_cache_t *kmem_move_cache; 1132b5fca8f8Stomee static taskq_t *kmem_move_taskq; 1133b5fca8f8Stomee 1134b5fca8f8Stomee static void kmem_cache_scan(kmem_cache_t *); 1135b5fca8f8Stomee static void kmem_cache_defrag(kmem_cache_t *); 1136b5fca8f8Stomee 1137b5fca8f8Stomee 11387c478bd9Sstevel@tonic-gate kmem_log_header_t *kmem_transaction_log; 11397c478bd9Sstevel@tonic-gate kmem_log_header_t *kmem_content_log; 11407c478bd9Sstevel@tonic-gate kmem_log_header_t *kmem_failure_log; 11417c478bd9Sstevel@tonic-gate kmem_log_header_t *kmem_slab_log; 11427c478bd9Sstevel@tonic-gate 11437c478bd9Sstevel@tonic-gate static int kmem_lite_count; /* # of PCs in kmem_buftag_lite_t */ 11447c478bd9Sstevel@tonic-gate 11457c478bd9Sstevel@tonic-gate #define KMEM_BUFTAG_LITE_ENTER(bt, count, caller) \ 11467c478bd9Sstevel@tonic-gate if ((count) > 0) { \ 11477c478bd9Sstevel@tonic-gate pc_t *_s = ((kmem_buftag_lite_t *)(bt))->bt_history; \ 11487c478bd9Sstevel@tonic-gate pc_t *_e; \ 11497c478bd9Sstevel@tonic-gate /* memmove() the old entries down one notch */ \ 11507c478bd9Sstevel@tonic-gate for (_e = &_s[(count) - 1]; _e > _s; _e--) \ 11517c478bd9Sstevel@tonic-gate *_e = *(_e - 1); \ 11527c478bd9Sstevel@tonic-gate *_s = (uintptr_t)(caller); \ 11537c478bd9Sstevel@tonic-gate } 11547c478bd9Sstevel@tonic-gate 11557c478bd9Sstevel@tonic-gate #define KMERR_MODIFIED 0 /* buffer modified while on freelist */ 11567c478bd9Sstevel@tonic-gate #define KMERR_REDZONE 1 /* redzone violation (write past end of buf) */ 11577c478bd9Sstevel@tonic-gate #define KMERR_DUPFREE 2 /* freed a buffer twice */ 11587c478bd9Sstevel@tonic-gate #define KMERR_BADADDR 3 /* freed a bad (unallocated) address */ 11597c478bd9Sstevel@tonic-gate #define KMERR_BADBUFTAG 4 /* buftag corrupted */ 11607c478bd9Sstevel@tonic-gate #define KMERR_BADBUFCTL 5 /* bufctl corrupted */ 11617c478bd9Sstevel@tonic-gate #define KMERR_BADCACHE 6 /* freed a buffer to the wrong cache */ 11627c478bd9Sstevel@tonic-gate #define KMERR_BADSIZE 7 /* alloc size != free size */ 11637c478bd9Sstevel@tonic-gate #define KMERR_BADBASE 8 /* buffer base address wrong */ 11647c478bd9Sstevel@tonic-gate 11657c478bd9Sstevel@tonic-gate struct { 11667c478bd9Sstevel@tonic-gate hrtime_t kmp_timestamp; /* timestamp of panic */ 11677c478bd9Sstevel@tonic-gate int kmp_error; /* type of kmem error */ 11687c478bd9Sstevel@tonic-gate void *kmp_buffer; /* buffer that induced panic */ 11697c478bd9Sstevel@tonic-gate void *kmp_realbuf; /* real start address for buffer */ 11707c478bd9Sstevel@tonic-gate kmem_cache_t *kmp_cache; /* buffer's cache according to client */ 11717c478bd9Sstevel@tonic-gate kmem_cache_t *kmp_realcache; /* actual cache containing buffer */ 11727c478bd9Sstevel@tonic-gate kmem_slab_t *kmp_slab; /* slab according to kmem_findslab() */ 11737c478bd9Sstevel@tonic-gate kmem_bufctl_t *kmp_bufctl; /* bufctl */ 11747c478bd9Sstevel@tonic-gate } kmem_panic_info; 11757c478bd9Sstevel@tonic-gate 11767c478bd9Sstevel@tonic-gate 11777c478bd9Sstevel@tonic-gate static void 11787c478bd9Sstevel@tonic-gate copy_pattern(uint64_t pattern, void *buf_arg, size_t size) 11797c478bd9Sstevel@tonic-gate { 11807c478bd9Sstevel@tonic-gate uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
11817c478bd9Sstevel@tonic-gate uint64_t *buf = buf_arg; 11827c478bd9Sstevel@tonic-gate 11837c478bd9Sstevel@tonic-gate while (buf < bufend) 11847c478bd9Sstevel@tonic-gate *buf++ = pattern; 11857c478bd9Sstevel@tonic-gate } 11867c478bd9Sstevel@tonic-gate 11877c478bd9Sstevel@tonic-gate static void * 11887c478bd9Sstevel@tonic-gate verify_pattern(uint64_t pattern, void *buf_arg, size_t size) 11897c478bd9Sstevel@tonic-gate { 11907c478bd9Sstevel@tonic-gate uint64_t *bufend = (uint64_t *)((char *)buf_arg + size); 11917c478bd9Sstevel@tonic-gate uint64_t *buf; 11927c478bd9Sstevel@tonic-gate 11937c478bd9Sstevel@tonic-gate for (buf = buf_arg; buf < bufend; buf++) 11947c478bd9Sstevel@tonic-gate if (*buf != pattern) 11957c478bd9Sstevel@tonic-gate return (buf); 11967c478bd9Sstevel@tonic-gate return (NULL); 11977c478bd9Sstevel@tonic-gate } 11987c478bd9Sstevel@tonic-gate 11997c478bd9Sstevel@tonic-gate static void * 12007c478bd9Sstevel@tonic-gate verify_and_copy_pattern(uint64_t old, uint64_t new, void *buf_arg, size_t size) 12017c478bd9Sstevel@tonic-gate { 12027c478bd9Sstevel@tonic-gate uint64_t *bufend = (uint64_t *)((char *)buf_arg + size); 12037c478bd9Sstevel@tonic-gate uint64_t *buf; 12047c478bd9Sstevel@tonic-gate 12057c478bd9Sstevel@tonic-gate for (buf = buf_arg; buf < bufend; buf++) { 12067c478bd9Sstevel@tonic-gate if (*buf != old) { 12077c478bd9Sstevel@tonic-gate copy_pattern(old, buf_arg, 12089f1b636aStomee (char *)buf - (char *)buf_arg); 12097c478bd9Sstevel@tonic-gate return (buf); 12107c478bd9Sstevel@tonic-gate } 12117c478bd9Sstevel@tonic-gate *buf = new; 12127c478bd9Sstevel@tonic-gate } 12137c478bd9Sstevel@tonic-gate 12147c478bd9Sstevel@tonic-gate return (NULL); 12157c478bd9Sstevel@tonic-gate } 12167c478bd9Sstevel@tonic-gate 12177c478bd9Sstevel@tonic-gate static void 12187c478bd9Sstevel@tonic-gate kmem_cache_applyall(void (*func)(kmem_cache_t *), taskq_t *tq, int tqflag) 12197c478bd9Sstevel@tonic-gate { 12207c478bd9Sstevel@tonic-gate kmem_cache_t *cp; 12217c478bd9Sstevel@tonic-gate 12227c478bd9Sstevel@tonic-gate mutex_enter(&kmem_cache_lock); 1223b5fca8f8Stomee for (cp = list_head(&kmem_caches); cp != NULL; 1224b5fca8f8Stomee cp = list_next(&kmem_caches, cp)) 12257c478bd9Sstevel@tonic-gate if (tq != NULL) 12267c478bd9Sstevel@tonic-gate (void) taskq_dispatch(tq, (task_func_t *)func, cp, 12277c478bd9Sstevel@tonic-gate tqflag); 12287c478bd9Sstevel@tonic-gate else 12297c478bd9Sstevel@tonic-gate func(cp); 12307c478bd9Sstevel@tonic-gate mutex_exit(&kmem_cache_lock); 12317c478bd9Sstevel@tonic-gate } 12327c478bd9Sstevel@tonic-gate 12337c478bd9Sstevel@tonic-gate static void 12347c478bd9Sstevel@tonic-gate kmem_cache_applyall_id(void (*func)(kmem_cache_t *), taskq_t *tq, int tqflag) 12357c478bd9Sstevel@tonic-gate { 12367c478bd9Sstevel@tonic-gate kmem_cache_t *cp; 12377c478bd9Sstevel@tonic-gate 12387c478bd9Sstevel@tonic-gate mutex_enter(&kmem_cache_lock); 1239b5fca8f8Stomee for (cp = list_head(&kmem_caches); cp != NULL; 1240b5fca8f8Stomee cp = list_next(&kmem_caches, cp)) { 12417c478bd9Sstevel@tonic-gate if (!(cp->cache_cflags & KMC_IDENTIFIER)) 12427c478bd9Sstevel@tonic-gate continue; 12437c478bd9Sstevel@tonic-gate if (tq != NULL) 12447c478bd9Sstevel@tonic-gate (void) taskq_dispatch(tq, (task_func_t *)func, cp, 12457c478bd9Sstevel@tonic-gate tqflag); 12467c478bd9Sstevel@tonic-gate else 12477c478bd9Sstevel@tonic-gate func(cp); 12487c478bd9Sstevel@tonic-gate } 12497c478bd9Sstevel@tonic-gate mutex_exit(&kmem_cache_lock); 12507c478bd9Sstevel@tonic-gate } 12517c478bd9Sstevel@tonic-gate 
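/*
 * For example (a usage sketch; the authoritative call sites, such as
 * kmem_reap(), appear later in this file): dispatching through a taskq
 * applies a function like kmem_cache_reap() to every cache asynchronously,
 * while a NULL taskq applies it synchronously under kmem_cache_lock:
 *
 *	kmem_cache_applyall(kmem_cache_reap, kmem_taskq, TQ_NOSLEEP);
 */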
12527c478bd9Sstevel@tonic-gate /* 12537c478bd9Sstevel@tonic-gate * Debugging support. Given a buffer address, find its slab. 12547c478bd9Sstevel@tonic-gate */ 12557c478bd9Sstevel@tonic-gate static kmem_slab_t * 12567c478bd9Sstevel@tonic-gate kmem_findslab(kmem_cache_t *cp, void *buf) 12577c478bd9Sstevel@tonic-gate { 12587c478bd9Sstevel@tonic-gate kmem_slab_t *sp; 12597c478bd9Sstevel@tonic-gate 12607c478bd9Sstevel@tonic-gate mutex_enter(&cp->cache_lock); 1261b5fca8f8Stomee for (sp = list_head(&cp->cache_complete_slabs); sp != NULL; 1262b5fca8f8Stomee sp = list_next(&cp->cache_complete_slabs, sp)) { 1263b5fca8f8Stomee if (KMEM_SLAB_MEMBER(sp, buf)) { 1264b5fca8f8Stomee mutex_exit(&cp->cache_lock); 1265b5fca8f8Stomee return (sp); 1266b5fca8f8Stomee } 1267b5fca8f8Stomee } 1268b5fca8f8Stomee for (sp = avl_first(&cp->cache_partial_slabs); sp != NULL; 1269b5fca8f8Stomee sp = AVL_NEXT(&cp->cache_partial_slabs, sp)) { 12707c478bd9Sstevel@tonic-gate if (KMEM_SLAB_MEMBER(sp, buf)) { 12717c478bd9Sstevel@tonic-gate mutex_exit(&cp->cache_lock); 12727c478bd9Sstevel@tonic-gate return (sp); 12737c478bd9Sstevel@tonic-gate } 12747c478bd9Sstevel@tonic-gate } 12757c478bd9Sstevel@tonic-gate mutex_exit(&cp->cache_lock); 12767c478bd9Sstevel@tonic-gate 12777c478bd9Sstevel@tonic-gate return (NULL); 12787c478bd9Sstevel@tonic-gate } 12797c478bd9Sstevel@tonic-gate 12807c478bd9Sstevel@tonic-gate static void 12817c478bd9Sstevel@tonic-gate kmem_error(int error, kmem_cache_t *cparg, void *bufarg) 12827c478bd9Sstevel@tonic-gate { 12837c478bd9Sstevel@tonic-gate kmem_buftag_t *btp = NULL; 12847c478bd9Sstevel@tonic-gate kmem_bufctl_t *bcp = NULL; 12857c478bd9Sstevel@tonic-gate kmem_cache_t *cp = cparg; 12867c478bd9Sstevel@tonic-gate kmem_slab_t *sp; 12877c478bd9Sstevel@tonic-gate uint64_t *off; 12887c478bd9Sstevel@tonic-gate void *buf = bufarg; 12897c478bd9Sstevel@tonic-gate 12907c478bd9Sstevel@tonic-gate kmem_logging = 0; /* stop logging when a bad thing happens */ 12917c478bd9Sstevel@tonic-gate 12927c478bd9Sstevel@tonic-gate kmem_panic_info.kmp_timestamp = gethrtime(); 12937c478bd9Sstevel@tonic-gate 12947c478bd9Sstevel@tonic-gate sp = kmem_findslab(cp, buf); 12957c478bd9Sstevel@tonic-gate if (sp == NULL) { 1296b5fca8f8Stomee for (cp = list_tail(&kmem_caches); cp != NULL; 1297b5fca8f8Stomee cp = list_prev(&kmem_caches, cp)) { 12987c478bd9Sstevel@tonic-gate if ((sp = kmem_findslab(cp, buf)) != NULL) 12997c478bd9Sstevel@tonic-gate break; 13007c478bd9Sstevel@tonic-gate } 13017c478bd9Sstevel@tonic-gate } 13027c478bd9Sstevel@tonic-gate 13037c478bd9Sstevel@tonic-gate if (sp == NULL) { 13047c478bd9Sstevel@tonic-gate cp = NULL; 13057c478bd9Sstevel@tonic-gate error = KMERR_BADADDR; 13067c478bd9Sstevel@tonic-gate } else { 13077c478bd9Sstevel@tonic-gate if (cp != cparg) 13087c478bd9Sstevel@tonic-gate error = KMERR_BADCACHE; 13097c478bd9Sstevel@tonic-gate else 13107c478bd9Sstevel@tonic-gate buf = (char *)bufarg - ((uintptr_t)bufarg - 13117c478bd9Sstevel@tonic-gate (uintptr_t)sp->slab_base) % cp->cache_chunksize; 13127c478bd9Sstevel@tonic-gate if (buf != bufarg) 13137c478bd9Sstevel@tonic-gate error = KMERR_BADBASE; 13147c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_BUFTAG) 13157c478bd9Sstevel@tonic-gate btp = KMEM_BUFTAG(cp, buf); 13167c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_HASH) { 13177c478bd9Sstevel@tonic-gate mutex_enter(&cp->cache_lock); 13187c478bd9Sstevel@tonic-gate for (bcp = *KMEM_HASH(cp, buf); bcp; bcp = bcp->bc_next) 13197c478bd9Sstevel@tonic-gate if (bcp->bc_addr == buf) 13207c478bd9Sstevel@tonic-gate 
break; 13217c478bd9Sstevel@tonic-gate mutex_exit(&cp->cache_lock); 13227c478bd9Sstevel@tonic-gate if (bcp == NULL && btp != NULL) 13237c478bd9Sstevel@tonic-gate bcp = btp->bt_bufctl; 13247c478bd9Sstevel@tonic-gate if (kmem_findslab(cp->cache_bufctl_cache, bcp) == 13257c478bd9Sstevel@tonic-gate NULL || P2PHASE((uintptr_t)bcp, KMEM_ALIGN) || 13267c478bd9Sstevel@tonic-gate bcp->bc_addr != buf) { 13277c478bd9Sstevel@tonic-gate error = KMERR_BADBUFCTL; 13287c478bd9Sstevel@tonic-gate bcp = NULL; 13297c478bd9Sstevel@tonic-gate } 13307c478bd9Sstevel@tonic-gate } 13317c478bd9Sstevel@tonic-gate } 13327c478bd9Sstevel@tonic-gate 13337c478bd9Sstevel@tonic-gate kmem_panic_info.kmp_error = error; 13347c478bd9Sstevel@tonic-gate kmem_panic_info.kmp_buffer = bufarg; 13357c478bd9Sstevel@tonic-gate kmem_panic_info.kmp_realbuf = buf; 13367c478bd9Sstevel@tonic-gate kmem_panic_info.kmp_cache = cparg; 13377c478bd9Sstevel@tonic-gate kmem_panic_info.kmp_realcache = cp; 13387c478bd9Sstevel@tonic-gate kmem_panic_info.kmp_slab = sp; 13397c478bd9Sstevel@tonic-gate kmem_panic_info.kmp_bufctl = bcp; 13407c478bd9Sstevel@tonic-gate 13417c478bd9Sstevel@tonic-gate printf("kernel memory allocator: "); 13427c478bd9Sstevel@tonic-gate 13437c478bd9Sstevel@tonic-gate switch (error) { 13447c478bd9Sstevel@tonic-gate 13457c478bd9Sstevel@tonic-gate case KMERR_MODIFIED: 13467c478bd9Sstevel@tonic-gate printf("buffer modified after being freed\n"); 13477c478bd9Sstevel@tonic-gate off = verify_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify); 13487c478bd9Sstevel@tonic-gate if (off == NULL) /* shouldn't happen */ 13497c478bd9Sstevel@tonic-gate off = buf; 13507c478bd9Sstevel@tonic-gate printf("modification occurred at offset 0x%lx " 13517c478bd9Sstevel@tonic-gate "(0x%llx replaced by 0x%llx)\n", 13527c478bd9Sstevel@tonic-gate (uintptr_t)off - (uintptr_t)buf, 13537c478bd9Sstevel@tonic-gate (longlong_t)KMEM_FREE_PATTERN, (longlong_t)*off); 13547c478bd9Sstevel@tonic-gate break; 13557c478bd9Sstevel@tonic-gate 13567c478bd9Sstevel@tonic-gate case KMERR_REDZONE: 13577c478bd9Sstevel@tonic-gate printf("redzone violation: write past end of buffer\n"); 13587c478bd9Sstevel@tonic-gate break; 13597c478bd9Sstevel@tonic-gate 13607c478bd9Sstevel@tonic-gate case KMERR_BADADDR: 13617c478bd9Sstevel@tonic-gate printf("invalid free: buffer not in cache\n"); 13627c478bd9Sstevel@tonic-gate break; 13637c478bd9Sstevel@tonic-gate 13647c478bd9Sstevel@tonic-gate case KMERR_DUPFREE: 13657c478bd9Sstevel@tonic-gate printf("duplicate free: buffer freed twice\n"); 13667c478bd9Sstevel@tonic-gate break; 13677c478bd9Sstevel@tonic-gate 13687c478bd9Sstevel@tonic-gate case KMERR_BADBUFTAG: 13697c478bd9Sstevel@tonic-gate printf("boundary tag corrupted\n"); 13707c478bd9Sstevel@tonic-gate printf("bcp ^ bxstat = %lx, should be %lx\n", 13717c478bd9Sstevel@tonic-gate (intptr_t)btp->bt_bufctl ^ btp->bt_bxstat, 13727c478bd9Sstevel@tonic-gate KMEM_BUFTAG_FREE); 13737c478bd9Sstevel@tonic-gate break; 13747c478bd9Sstevel@tonic-gate 13757c478bd9Sstevel@tonic-gate case KMERR_BADBUFCTL: 13767c478bd9Sstevel@tonic-gate printf("bufctl corrupted\n"); 13777c478bd9Sstevel@tonic-gate break; 13787c478bd9Sstevel@tonic-gate 13797c478bd9Sstevel@tonic-gate case KMERR_BADCACHE: 13807c478bd9Sstevel@tonic-gate printf("buffer freed to wrong cache\n"); 13817c478bd9Sstevel@tonic-gate printf("buffer was allocated from %s,\n", cp->cache_name); 13827c478bd9Sstevel@tonic-gate printf("caller attempting free to %s.\n", cparg->cache_name); 13837c478bd9Sstevel@tonic-gate break; 13847c478bd9Sstevel@tonic-gate 
13857c478bd9Sstevel@tonic-gate case KMERR_BADSIZE: 13867c478bd9Sstevel@tonic-gate printf("bad free: free size (%u) != alloc size (%u)\n", 13877c478bd9Sstevel@tonic-gate KMEM_SIZE_DECODE(((uint32_t *)btp)[0]), 13887c478bd9Sstevel@tonic-gate KMEM_SIZE_DECODE(((uint32_t *)btp)[1])); 13897c478bd9Sstevel@tonic-gate break; 13907c478bd9Sstevel@tonic-gate 13917c478bd9Sstevel@tonic-gate case KMERR_BADBASE: 13927c478bd9Sstevel@tonic-gate printf("bad free: free address (%p) != alloc address (%p)\n", 13937c478bd9Sstevel@tonic-gate bufarg, buf); 13947c478bd9Sstevel@tonic-gate break; 13957c478bd9Sstevel@tonic-gate } 13967c478bd9Sstevel@tonic-gate 13977c478bd9Sstevel@tonic-gate printf("buffer=%p bufctl=%p cache: %s\n", 13987c478bd9Sstevel@tonic-gate bufarg, (void *)bcp, cparg->cache_name); 13997c478bd9Sstevel@tonic-gate 14007c478bd9Sstevel@tonic-gate if (bcp != NULL && (cp->cache_flags & KMF_AUDIT) && 14017c478bd9Sstevel@tonic-gate error != KMERR_BADBUFCTL) { 14027c478bd9Sstevel@tonic-gate int d; 14037c478bd9Sstevel@tonic-gate timestruc_t ts; 14047c478bd9Sstevel@tonic-gate kmem_bufctl_audit_t *bcap = (kmem_bufctl_audit_t *)bcp; 14057c478bd9Sstevel@tonic-gate 14067c478bd9Sstevel@tonic-gate hrt2ts(kmem_panic_info.kmp_timestamp - bcap->bc_timestamp, &ts); 14077c478bd9Sstevel@tonic-gate printf("previous transaction on buffer %p:\n", buf); 14087c478bd9Sstevel@tonic-gate printf("thread=%p time=T-%ld.%09ld slab=%p cache: %s\n", 14097c478bd9Sstevel@tonic-gate (void *)bcap->bc_thread, ts.tv_sec, ts.tv_nsec, 14107c478bd9Sstevel@tonic-gate (void *)sp, cp->cache_name); 14117c478bd9Sstevel@tonic-gate for (d = 0; d < MIN(bcap->bc_depth, KMEM_STACK_DEPTH); d++) { 14127c478bd9Sstevel@tonic-gate ulong_t off; 14137c478bd9Sstevel@tonic-gate char *sym = kobj_getsymname(bcap->bc_stack[d], &off); 14147c478bd9Sstevel@tonic-gate printf("%s+%lx\n", sym ? sym : "?", off); 14157c478bd9Sstevel@tonic-gate } 14167c478bd9Sstevel@tonic-gate } 14177c478bd9Sstevel@tonic-gate if (kmem_panic > 0) 14187c478bd9Sstevel@tonic-gate panic("kernel heap corruption detected"); 14197c478bd9Sstevel@tonic-gate if (kmem_panic == 0) 14207c478bd9Sstevel@tonic-gate debug_enter(NULL); 14217c478bd9Sstevel@tonic-gate kmem_logging = 1; /* resume logging */ 14227c478bd9Sstevel@tonic-gate } 14237c478bd9Sstevel@tonic-gate 14247c478bd9Sstevel@tonic-gate static kmem_log_header_t * 14257c478bd9Sstevel@tonic-gate kmem_log_init(size_t logsize) 14267c478bd9Sstevel@tonic-gate { 14277c478bd9Sstevel@tonic-gate kmem_log_header_t *lhp; 14287c478bd9Sstevel@tonic-gate int nchunks = 4 * max_ncpus; 14297c478bd9Sstevel@tonic-gate size_t lhsize = (size_t)&((kmem_log_header_t *)0)->lh_cpu[max_ncpus]; 14307c478bd9Sstevel@tonic-gate int i; 14317c478bd9Sstevel@tonic-gate 14327c478bd9Sstevel@tonic-gate /* 14337c478bd9Sstevel@tonic-gate * Make sure that lhp->lh_cpu[] is nicely aligned 14347c478bd9Sstevel@tonic-gate * to prevent false sharing of cache lines. 
14357c478bd9Sstevel@tonic-gate */ 14367c478bd9Sstevel@tonic-gate lhsize = P2ROUNDUP(lhsize, KMEM_ALIGN); 14377c478bd9Sstevel@tonic-gate lhp = vmem_xalloc(kmem_log_arena, lhsize, 64, P2NPHASE(lhsize, 64), 0, 14387c478bd9Sstevel@tonic-gate NULL, NULL, VM_SLEEP); 14397c478bd9Sstevel@tonic-gate bzero(lhp, lhsize); 14407c478bd9Sstevel@tonic-gate 14417c478bd9Sstevel@tonic-gate mutex_init(&lhp->lh_lock, NULL, MUTEX_DEFAULT, NULL); 14427c478bd9Sstevel@tonic-gate lhp->lh_nchunks = nchunks; 14437c478bd9Sstevel@tonic-gate lhp->lh_chunksize = P2ROUNDUP(logsize / nchunks + 1, PAGESIZE); 14447c478bd9Sstevel@tonic-gate lhp->lh_base = vmem_alloc(kmem_log_arena, 14457c478bd9Sstevel@tonic-gate lhp->lh_chunksize * nchunks, VM_SLEEP); 14467c478bd9Sstevel@tonic-gate lhp->lh_free = vmem_alloc(kmem_log_arena, 14477c478bd9Sstevel@tonic-gate nchunks * sizeof (int), VM_SLEEP); 14487c478bd9Sstevel@tonic-gate bzero(lhp->lh_base, lhp->lh_chunksize * nchunks); 14497c478bd9Sstevel@tonic-gate 14507c478bd9Sstevel@tonic-gate for (i = 0; i < max_ncpus; i++) { 14517c478bd9Sstevel@tonic-gate kmem_cpu_log_header_t *clhp = &lhp->lh_cpu[i]; 14527c478bd9Sstevel@tonic-gate mutex_init(&clhp->clh_lock, NULL, MUTEX_DEFAULT, NULL); 14537c478bd9Sstevel@tonic-gate clhp->clh_chunk = i; 14547c478bd9Sstevel@tonic-gate } 14557c478bd9Sstevel@tonic-gate 14567c478bd9Sstevel@tonic-gate for (i = max_ncpus; i < nchunks; i++) 14577c478bd9Sstevel@tonic-gate lhp->lh_free[i] = i; 14587c478bd9Sstevel@tonic-gate 14597c478bd9Sstevel@tonic-gate lhp->lh_head = max_ncpus; 14607c478bd9Sstevel@tonic-gate lhp->lh_tail = 0; 14617c478bd9Sstevel@tonic-gate 14627c478bd9Sstevel@tonic-gate return (lhp); 14637c478bd9Sstevel@tonic-gate } 14647c478bd9Sstevel@tonic-gate 14657c478bd9Sstevel@tonic-gate static void * 14667c478bd9Sstevel@tonic-gate kmem_log_enter(kmem_log_header_t *lhp, void *data, size_t size) 14677c478bd9Sstevel@tonic-gate { 14687c478bd9Sstevel@tonic-gate void *logspace; 14697c478bd9Sstevel@tonic-gate kmem_cpu_log_header_t *clhp = &lhp->lh_cpu[CPU->cpu_seqid]; 14707c478bd9Sstevel@tonic-gate 14717c478bd9Sstevel@tonic-gate if (lhp == NULL || kmem_logging == 0 || panicstr) 14727c478bd9Sstevel@tonic-gate return (NULL); 14737c478bd9Sstevel@tonic-gate 14747c478bd9Sstevel@tonic-gate mutex_enter(&clhp->clh_lock); 14757c478bd9Sstevel@tonic-gate clhp->clh_hits++; 14767c478bd9Sstevel@tonic-gate if (size > clhp->clh_avail) { 14777c478bd9Sstevel@tonic-gate mutex_enter(&lhp->lh_lock); 14787c478bd9Sstevel@tonic-gate lhp->lh_hits++; 14797c478bd9Sstevel@tonic-gate lhp->lh_free[lhp->lh_tail] = clhp->clh_chunk; 14807c478bd9Sstevel@tonic-gate lhp->lh_tail = (lhp->lh_tail + 1) % lhp->lh_nchunks; 14817c478bd9Sstevel@tonic-gate clhp->clh_chunk = lhp->lh_free[lhp->lh_head]; 14827c478bd9Sstevel@tonic-gate lhp->lh_head = (lhp->lh_head + 1) % lhp->lh_nchunks; 14837c478bd9Sstevel@tonic-gate clhp->clh_current = lhp->lh_base + 14849f1b636aStomee clhp->clh_chunk * lhp->lh_chunksize; 14857c478bd9Sstevel@tonic-gate clhp->clh_avail = lhp->lh_chunksize; 14867c478bd9Sstevel@tonic-gate if (size > lhp->lh_chunksize) 14877c478bd9Sstevel@tonic-gate size = lhp->lh_chunksize; 14887c478bd9Sstevel@tonic-gate mutex_exit(&lhp->lh_lock); 14897c478bd9Sstevel@tonic-gate } 14907c478bd9Sstevel@tonic-gate logspace = clhp->clh_current; 14917c478bd9Sstevel@tonic-gate clhp->clh_current += size; 14927c478bd9Sstevel@tonic-gate clhp->clh_avail -= size; 14937c478bd9Sstevel@tonic-gate bcopy(data, logspace, size); 14947c478bd9Sstevel@tonic-gate mutex_exit(&clhp->clh_lock); 14957c478bd9Sstevel@tonic-gate return 
(logspace); 14967c478bd9Sstevel@tonic-gate } 14977c478bd9Sstevel@tonic-gate 14987c478bd9Sstevel@tonic-gate #define KMEM_AUDIT(lp, cp, bcp) \ 14997c478bd9Sstevel@tonic-gate { \ 15007c478bd9Sstevel@tonic-gate kmem_bufctl_audit_t *_bcp = (kmem_bufctl_audit_t *)(bcp); \ 15017c478bd9Sstevel@tonic-gate _bcp->bc_timestamp = gethrtime(); \ 15027c478bd9Sstevel@tonic-gate _bcp->bc_thread = curthread; \ 15037c478bd9Sstevel@tonic-gate _bcp->bc_depth = getpcstack(_bcp->bc_stack, KMEM_STACK_DEPTH); \ 15047c478bd9Sstevel@tonic-gate _bcp->bc_lastlog = kmem_log_enter((lp), _bcp, sizeof (*_bcp)); \ 15057c478bd9Sstevel@tonic-gate } 15067c478bd9Sstevel@tonic-gate 15077c478bd9Sstevel@tonic-gate static void 15087c478bd9Sstevel@tonic-gate kmem_log_event(kmem_log_header_t *lp, kmem_cache_t *cp, 15097c478bd9Sstevel@tonic-gate kmem_slab_t *sp, void *addr) 15107c478bd9Sstevel@tonic-gate { 15117c478bd9Sstevel@tonic-gate kmem_bufctl_audit_t bca; 15127c478bd9Sstevel@tonic-gate 15137c478bd9Sstevel@tonic-gate bzero(&bca, sizeof (kmem_bufctl_audit_t)); 15147c478bd9Sstevel@tonic-gate bca.bc_addr = addr; 15157c478bd9Sstevel@tonic-gate bca.bc_slab = sp; 15167c478bd9Sstevel@tonic-gate bca.bc_cache = cp; 15177c478bd9Sstevel@tonic-gate KMEM_AUDIT(lp, cp, &bca); 15187c478bd9Sstevel@tonic-gate } 15197c478bd9Sstevel@tonic-gate 15207c478bd9Sstevel@tonic-gate /* 15217c478bd9Sstevel@tonic-gate * Create a new slab for cache cp. 15227c478bd9Sstevel@tonic-gate */ 15237c478bd9Sstevel@tonic-gate static kmem_slab_t * 15247c478bd9Sstevel@tonic-gate kmem_slab_create(kmem_cache_t *cp, int kmflag) 15257c478bd9Sstevel@tonic-gate { 15267c478bd9Sstevel@tonic-gate size_t slabsize = cp->cache_slabsize; 15277c478bd9Sstevel@tonic-gate size_t chunksize = cp->cache_chunksize; 15287c478bd9Sstevel@tonic-gate int cache_flags = cp->cache_flags; 15297c478bd9Sstevel@tonic-gate size_t color, chunks; 15307c478bd9Sstevel@tonic-gate char *buf, *slab; 15317c478bd9Sstevel@tonic-gate kmem_slab_t *sp; 15327c478bd9Sstevel@tonic-gate kmem_bufctl_t *bcp; 15337c478bd9Sstevel@tonic-gate vmem_t *vmp = cp->cache_arena; 15347c478bd9Sstevel@tonic-gate 1535b5fca8f8Stomee ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 1536b5fca8f8Stomee 15377c478bd9Sstevel@tonic-gate color = cp->cache_color + cp->cache_align; 15387c478bd9Sstevel@tonic-gate if (color > cp->cache_maxcolor) 15397c478bd9Sstevel@tonic-gate color = cp->cache_mincolor; 15407c478bd9Sstevel@tonic-gate cp->cache_color = color; 15417c478bd9Sstevel@tonic-gate 15427c478bd9Sstevel@tonic-gate slab = vmem_alloc(vmp, slabsize, kmflag & KM_VMFLAGS); 15437c478bd9Sstevel@tonic-gate 15447c478bd9Sstevel@tonic-gate if (slab == NULL) 15457c478bd9Sstevel@tonic-gate goto vmem_alloc_failure; 15467c478bd9Sstevel@tonic-gate 15477c478bd9Sstevel@tonic-gate ASSERT(P2PHASE((uintptr_t)slab, vmp->vm_quantum) == 0); 15487c478bd9Sstevel@tonic-gate 1549b5fca8f8Stomee /* 1550b5fca8f8Stomee * Reverify what was already checked in kmem_cache_set_move(), since the 1551b5fca8f8Stomee * consolidator depends (for correctness) on slabs being initialized 1552b5fca8f8Stomee * with the 0xbaddcafe memory pattern (setting a low order bit usable by 1553b5fca8f8Stomee * clients to distinguish uninitialized memory from known objects). 
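 * (The comment above the deadlist handling in kmem_slab_free(), later in
 * this file, explains how a client's move callback can test those low order
 * bits to recognize buffers it has never seen.)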
1554b5fca8f8Stomee */ 1555b5fca8f8Stomee ASSERT((cp->cache_move == NULL) || !(cp->cache_cflags & KMC_NOTOUCH)); 15567c478bd9Sstevel@tonic-gate if (!(cp->cache_cflags & KMC_NOTOUCH)) 15577c478bd9Sstevel@tonic-gate copy_pattern(KMEM_UNINITIALIZED_PATTERN, slab, slabsize); 15587c478bd9Sstevel@tonic-gate 15597c478bd9Sstevel@tonic-gate if (cache_flags & KMF_HASH) { 15607c478bd9Sstevel@tonic-gate if ((sp = kmem_cache_alloc(kmem_slab_cache, kmflag)) == NULL) 15617c478bd9Sstevel@tonic-gate goto slab_alloc_failure; 15627c478bd9Sstevel@tonic-gate chunks = (slabsize - color) / chunksize; 15637c478bd9Sstevel@tonic-gate } else { 15647c478bd9Sstevel@tonic-gate sp = KMEM_SLAB(cp, slab); 15657c478bd9Sstevel@tonic-gate chunks = (slabsize - sizeof (kmem_slab_t) - color) / chunksize; 15667c478bd9Sstevel@tonic-gate } 15677c478bd9Sstevel@tonic-gate 15687c478bd9Sstevel@tonic-gate sp->slab_cache = cp; 15697c478bd9Sstevel@tonic-gate sp->slab_head = NULL; 15707c478bd9Sstevel@tonic-gate sp->slab_refcnt = 0; 15717c478bd9Sstevel@tonic-gate sp->slab_base = buf = slab + color; 15727c478bd9Sstevel@tonic-gate sp->slab_chunks = chunks; 1573b5fca8f8Stomee sp->slab_stuck_offset = (uint32_t)-1; 1574b5fca8f8Stomee sp->slab_later_count = 0; 1575b5fca8f8Stomee sp->slab_flags = 0; 15767c478bd9Sstevel@tonic-gate 15777c478bd9Sstevel@tonic-gate ASSERT(chunks > 0); 15787c478bd9Sstevel@tonic-gate while (chunks-- != 0) { 15797c478bd9Sstevel@tonic-gate if (cache_flags & KMF_HASH) { 15807c478bd9Sstevel@tonic-gate bcp = kmem_cache_alloc(cp->cache_bufctl_cache, kmflag); 15817c478bd9Sstevel@tonic-gate if (bcp == NULL) 15827c478bd9Sstevel@tonic-gate goto bufctl_alloc_failure; 15837c478bd9Sstevel@tonic-gate if (cache_flags & KMF_AUDIT) { 15847c478bd9Sstevel@tonic-gate kmem_bufctl_audit_t *bcap = 15857c478bd9Sstevel@tonic-gate (kmem_bufctl_audit_t *)bcp; 15867c478bd9Sstevel@tonic-gate bzero(bcap, sizeof (kmem_bufctl_audit_t)); 15877c478bd9Sstevel@tonic-gate bcap->bc_cache = cp; 15887c478bd9Sstevel@tonic-gate } 15897c478bd9Sstevel@tonic-gate bcp->bc_addr = buf; 15907c478bd9Sstevel@tonic-gate bcp->bc_slab = sp; 15917c478bd9Sstevel@tonic-gate } else { 15927c478bd9Sstevel@tonic-gate bcp = KMEM_BUFCTL(cp, buf); 15937c478bd9Sstevel@tonic-gate } 15947c478bd9Sstevel@tonic-gate if (cache_flags & KMF_BUFTAG) { 15957c478bd9Sstevel@tonic-gate kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 15967c478bd9Sstevel@tonic-gate btp->bt_redzone = KMEM_REDZONE_PATTERN; 15977c478bd9Sstevel@tonic-gate btp->bt_bufctl = bcp; 15987c478bd9Sstevel@tonic-gate btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE; 15997c478bd9Sstevel@tonic-gate if (cache_flags & KMF_DEADBEEF) { 16007c478bd9Sstevel@tonic-gate copy_pattern(KMEM_FREE_PATTERN, buf, 16017c478bd9Sstevel@tonic-gate cp->cache_verify); 16027c478bd9Sstevel@tonic-gate } 16037c478bd9Sstevel@tonic-gate } 16047c478bd9Sstevel@tonic-gate bcp->bc_next = sp->slab_head; 16057c478bd9Sstevel@tonic-gate sp->slab_head = bcp; 16067c478bd9Sstevel@tonic-gate buf += chunksize; 16077c478bd9Sstevel@tonic-gate } 16087c478bd9Sstevel@tonic-gate 16097c478bd9Sstevel@tonic-gate kmem_log_event(kmem_slab_log, cp, sp, slab); 16107c478bd9Sstevel@tonic-gate 16117c478bd9Sstevel@tonic-gate return (sp); 16127c478bd9Sstevel@tonic-gate 16137c478bd9Sstevel@tonic-gate bufctl_alloc_failure: 16147c478bd9Sstevel@tonic-gate 16157c478bd9Sstevel@tonic-gate while ((bcp = sp->slab_head) != NULL) { 16167c478bd9Sstevel@tonic-gate sp->slab_head = bcp->bc_next; 16177c478bd9Sstevel@tonic-gate kmem_cache_free(cp->cache_bufctl_cache, bcp); 16187c478bd9Sstevel@tonic-gate } 
16197c478bd9Sstevel@tonic-gate kmem_cache_free(kmem_slab_cache, sp); 16207c478bd9Sstevel@tonic-gate 16217c478bd9Sstevel@tonic-gate slab_alloc_failure: 16227c478bd9Sstevel@tonic-gate 16237c478bd9Sstevel@tonic-gate vmem_free(vmp, slab, slabsize); 16247c478bd9Sstevel@tonic-gate 16257c478bd9Sstevel@tonic-gate vmem_alloc_failure: 16267c478bd9Sstevel@tonic-gate 16277c478bd9Sstevel@tonic-gate kmem_log_event(kmem_failure_log, cp, NULL, NULL); 16287c478bd9Sstevel@tonic-gate atomic_add_64(&cp->cache_alloc_fail, 1); 16297c478bd9Sstevel@tonic-gate 16307c478bd9Sstevel@tonic-gate return (NULL); 16317c478bd9Sstevel@tonic-gate } 16327c478bd9Sstevel@tonic-gate 16337c478bd9Sstevel@tonic-gate /* 16347c478bd9Sstevel@tonic-gate * Destroy a slab. 16357c478bd9Sstevel@tonic-gate */ 16367c478bd9Sstevel@tonic-gate static void 16377c478bd9Sstevel@tonic-gate kmem_slab_destroy(kmem_cache_t *cp, kmem_slab_t *sp) 16387c478bd9Sstevel@tonic-gate { 16397c478bd9Sstevel@tonic-gate vmem_t *vmp = cp->cache_arena; 16407c478bd9Sstevel@tonic-gate void *slab = (void *)P2ALIGN((uintptr_t)sp->slab_base, vmp->vm_quantum); 16417c478bd9Sstevel@tonic-gate 1642b5fca8f8Stomee ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 1643b5fca8f8Stomee ASSERT(sp->slab_refcnt == 0); 1644b5fca8f8Stomee 16457c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_HASH) { 16467c478bd9Sstevel@tonic-gate kmem_bufctl_t *bcp; 16477c478bd9Sstevel@tonic-gate while ((bcp = sp->slab_head) != NULL) { 16487c478bd9Sstevel@tonic-gate sp->slab_head = bcp->bc_next; 16497c478bd9Sstevel@tonic-gate kmem_cache_free(cp->cache_bufctl_cache, bcp); 16507c478bd9Sstevel@tonic-gate } 16517c478bd9Sstevel@tonic-gate kmem_cache_free(kmem_slab_cache, sp); 16527c478bd9Sstevel@tonic-gate } 16537c478bd9Sstevel@tonic-gate vmem_free(vmp, slab, cp->cache_slabsize); 16547c478bd9Sstevel@tonic-gate } 16557c478bd9Sstevel@tonic-gate 16567c478bd9Sstevel@tonic-gate static void * 1657b5fca8f8Stomee kmem_slab_alloc_impl(kmem_cache_t *cp, kmem_slab_t *sp) 16587c478bd9Sstevel@tonic-gate { 16597c478bd9Sstevel@tonic-gate kmem_bufctl_t *bcp, **hash_bucket; 16607c478bd9Sstevel@tonic-gate void *buf; 16617c478bd9Sstevel@tonic-gate 1662b5fca8f8Stomee ASSERT(MUTEX_HELD(&cp->cache_lock)); 1663b5fca8f8Stomee /* 1664b5fca8f8Stomee * kmem_slab_alloc() drops cache_lock when it creates a new slab, so we 1665b5fca8f8Stomee * can't ASSERT(avl_is_empty(&cp->cache_partial_slabs)) here when the 1666b5fca8f8Stomee * slab is newly created (sp->slab_refcnt == 0). 
1667b5fca8f8Stomee */ 1668b5fca8f8Stomee ASSERT((sp->slab_refcnt == 0) || (KMEM_SLAB_IS_PARTIAL(sp) && 1669b5fca8f8Stomee (sp == avl_first(&cp->cache_partial_slabs)))); 16707c478bd9Sstevel@tonic-gate ASSERT(sp->slab_cache == cp); 16717c478bd9Sstevel@tonic-gate 1672b5fca8f8Stomee cp->cache_slab_alloc++; 16739f1b636aStomee cp->cache_bufslab--; 16747c478bd9Sstevel@tonic-gate sp->slab_refcnt++; 16757c478bd9Sstevel@tonic-gate 16767c478bd9Sstevel@tonic-gate bcp = sp->slab_head; 16777c478bd9Sstevel@tonic-gate if ((sp->slab_head = bcp->bc_next) == NULL) { 1678b5fca8f8Stomee ASSERT(KMEM_SLAB_IS_ALL_USED(sp)); 1679b5fca8f8Stomee if (sp->slab_refcnt == 1) { 1680b5fca8f8Stomee ASSERT(sp->slab_chunks == 1); 1681b5fca8f8Stomee } else { 1682b5fca8f8Stomee ASSERT(sp->slab_chunks > 1); /* the slab was partial */ 1683b5fca8f8Stomee avl_remove(&cp->cache_partial_slabs, sp); 1684b5fca8f8Stomee sp->slab_later_count = 0; /* clear history */ 1685b5fca8f8Stomee sp->slab_flags &= ~KMEM_SLAB_NOMOVE; 1686b5fca8f8Stomee sp->slab_stuck_offset = (uint32_t)-1; 1687b5fca8f8Stomee } 1688b5fca8f8Stomee list_insert_head(&cp->cache_complete_slabs, sp); 1689b5fca8f8Stomee cp->cache_complete_slab_count++; 1690b5fca8f8Stomee } else { 1691b5fca8f8Stomee ASSERT(KMEM_SLAB_IS_PARTIAL(sp)); 1692b5fca8f8Stomee if (sp->slab_refcnt == 1) { 1693b5fca8f8Stomee avl_add(&cp->cache_partial_slabs, sp); 1694b5fca8f8Stomee } else { 1695b5fca8f8Stomee /* 1696b5fca8f8Stomee * The slab is now more allocated than it was, so the 1697b5fca8f8Stomee * order remains unchanged. 1698b5fca8f8Stomee */ 1699b5fca8f8Stomee ASSERT(!avl_update(&cp->cache_partial_slabs, sp)); 1700b5fca8f8Stomee } 17017c478bd9Sstevel@tonic-gate } 17027c478bd9Sstevel@tonic-gate 17037c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_HASH) { 17047c478bd9Sstevel@tonic-gate /* 17057c478bd9Sstevel@tonic-gate * Add buffer to allocated-address hash table. 17067c478bd9Sstevel@tonic-gate */ 17077c478bd9Sstevel@tonic-gate buf = bcp->bc_addr; 17087c478bd9Sstevel@tonic-gate hash_bucket = KMEM_HASH(cp, buf); 17097c478bd9Sstevel@tonic-gate bcp->bc_next = *hash_bucket; 17107c478bd9Sstevel@tonic-gate *hash_bucket = bcp; 17117c478bd9Sstevel@tonic-gate if ((cp->cache_flags & (KMF_AUDIT | KMF_BUFTAG)) == KMF_AUDIT) { 17127c478bd9Sstevel@tonic-gate KMEM_AUDIT(kmem_transaction_log, cp, bcp); 17137c478bd9Sstevel@tonic-gate } 17147c478bd9Sstevel@tonic-gate } else { 17157c478bd9Sstevel@tonic-gate buf = KMEM_BUF(cp, bcp); 17167c478bd9Sstevel@tonic-gate } 17177c478bd9Sstevel@tonic-gate 17187c478bd9Sstevel@tonic-gate ASSERT(KMEM_SLAB_MEMBER(sp, buf)); 1719b5fca8f8Stomee return (buf); 1720b5fca8f8Stomee } 1721b5fca8f8Stomee 1722b5fca8f8Stomee /* 1723b5fca8f8Stomee * Allocate a raw (unconstructed) buffer from cp's slab layer. 1724b5fca8f8Stomee */ 1725b5fca8f8Stomee static void * 1726b5fca8f8Stomee kmem_slab_alloc(kmem_cache_t *cp, int kmflag) 1727b5fca8f8Stomee { 1728b5fca8f8Stomee kmem_slab_t *sp; 1729b5fca8f8Stomee void *buf; 17304d4c4c43STom Erickson boolean_t test_destructor; 1731b5fca8f8Stomee 1732b5fca8f8Stomee mutex_enter(&cp->cache_lock); 17334d4c4c43STom Erickson test_destructor = (cp->cache_slab_alloc == 0); 1734b5fca8f8Stomee sp = avl_first(&cp->cache_partial_slabs); 1735b5fca8f8Stomee if (sp == NULL) { 1736b5fca8f8Stomee ASSERT(cp->cache_bufslab == 0); 1737b5fca8f8Stomee 1738b5fca8f8Stomee /* 1739b5fca8f8Stomee * The freelist is empty. Create a new slab. 
1740b5fca8f8Stomee */ 1741b5fca8f8Stomee mutex_exit(&cp->cache_lock); 1742b5fca8f8Stomee if ((sp = kmem_slab_create(cp, kmflag)) == NULL) { 1743b5fca8f8Stomee return (NULL); 1744b5fca8f8Stomee } 1745b5fca8f8Stomee mutex_enter(&cp->cache_lock); 1746b5fca8f8Stomee cp->cache_slab_create++; 1747b5fca8f8Stomee if ((cp->cache_buftotal += sp->slab_chunks) > cp->cache_bufmax) 1748b5fca8f8Stomee cp->cache_bufmax = cp->cache_buftotal; 1749b5fca8f8Stomee cp->cache_bufslab += sp->slab_chunks; 1750b5fca8f8Stomee } 17517c478bd9Sstevel@tonic-gate 1752b5fca8f8Stomee buf = kmem_slab_alloc_impl(cp, sp); 1753b5fca8f8Stomee ASSERT((cp->cache_slab_create - cp->cache_slab_destroy) == 1754b5fca8f8Stomee (cp->cache_complete_slab_count + 1755b5fca8f8Stomee avl_numnodes(&cp->cache_partial_slabs) + 1756b5fca8f8Stomee (cp->cache_defrag == NULL ? 0 : cp->cache_defrag->kmd_deadcount))); 17577c478bd9Sstevel@tonic-gate mutex_exit(&cp->cache_lock); 17587c478bd9Sstevel@tonic-gate 17594d4c4c43STom Erickson if (test_destructor && cp->cache_destructor != NULL) { 17604d4c4c43STom Erickson /* 17614d4c4c43STom Erickson * On the first kmem_slab_alloc(), assert that it is valid to 17624d4c4c43STom Erickson * call the destructor on a newly constructed object without any 17634d4c4c43STom Erickson * client involvement. 17644d4c4c43STom Erickson */ 17654d4c4c43STom Erickson if ((cp->cache_constructor == NULL) || 17664d4c4c43STom Erickson cp->cache_constructor(buf, cp->cache_private, 17674d4c4c43STom Erickson kmflag) == 0) { 17684d4c4c43STom Erickson cp->cache_destructor(buf, cp->cache_private); 17694d4c4c43STom Erickson } 17704d4c4c43STom Erickson copy_pattern(KMEM_UNINITIALIZED_PATTERN, buf, 17714d4c4c43STom Erickson cp->cache_bufsize); 17724d4c4c43STom Erickson if (cp->cache_flags & KMF_DEADBEEF) { 17734d4c4c43STom Erickson copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify); 17744d4c4c43STom Erickson } 17754d4c4c43STom Erickson } 17764d4c4c43STom Erickson 17777c478bd9Sstevel@tonic-gate return (buf); 17787c478bd9Sstevel@tonic-gate } 17797c478bd9Sstevel@tonic-gate 1780b5fca8f8Stomee static void kmem_slab_move_yes(kmem_cache_t *, kmem_slab_t *, void *); 1781b5fca8f8Stomee 17827c478bd9Sstevel@tonic-gate /* 17837c478bd9Sstevel@tonic-gate * Free a raw (unconstructed) buffer to cp's slab layer. 17847c478bd9Sstevel@tonic-gate */ 17857c478bd9Sstevel@tonic-gate static void 17867c478bd9Sstevel@tonic-gate kmem_slab_free(kmem_cache_t *cp, void *buf) 17877c478bd9Sstevel@tonic-gate { 17887c478bd9Sstevel@tonic-gate kmem_slab_t *sp; 17897c478bd9Sstevel@tonic-gate kmem_bufctl_t *bcp, **prev_bcpp; 17907c478bd9Sstevel@tonic-gate 17917c478bd9Sstevel@tonic-gate ASSERT(buf != NULL); 17927c478bd9Sstevel@tonic-gate 17937c478bd9Sstevel@tonic-gate mutex_enter(&cp->cache_lock); 17947c478bd9Sstevel@tonic-gate cp->cache_slab_free++; 17957c478bd9Sstevel@tonic-gate 17967c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_HASH) { 17977c478bd9Sstevel@tonic-gate /* 17987c478bd9Sstevel@tonic-gate * Look up buffer in allocated-address hash table. 
17997c478bd9Sstevel@tonic-gate */ 18007c478bd9Sstevel@tonic-gate prev_bcpp = KMEM_HASH(cp, buf); 18017c478bd9Sstevel@tonic-gate while ((bcp = *prev_bcpp) != NULL) { 18027c478bd9Sstevel@tonic-gate if (bcp->bc_addr == buf) { 18037c478bd9Sstevel@tonic-gate *prev_bcpp = bcp->bc_next; 18047c478bd9Sstevel@tonic-gate sp = bcp->bc_slab; 18057c478bd9Sstevel@tonic-gate break; 18067c478bd9Sstevel@tonic-gate } 18077c478bd9Sstevel@tonic-gate cp->cache_lookup_depth++; 18087c478bd9Sstevel@tonic-gate prev_bcpp = &bcp->bc_next; 18097c478bd9Sstevel@tonic-gate } 18107c478bd9Sstevel@tonic-gate } else { 18117c478bd9Sstevel@tonic-gate bcp = KMEM_BUFCTL(cp, buf); 18127c478bd9Sstevel@tonic-gate sp = KMEM_SLAB(cp, buf); 18137c478bd9Sstevel@tonic-gate } 18147c478bd9Sstevel@tonic-gate 18157c478bd9Sstevel@tonic-gate if (bcp == NULL || sp->slab_cache != cp || !KMEM_SLAB_MEMBER(sp, buf)) { 18167c478bd9Sstevel@tonic-gate mutex_exit(&cp->cache_lock); 18177c478bd9Sstevel@tonic-gate kmem_error(KMERR_BADADDR, cp, buf); 18187c478bd9Sstevel@tonic-gate return; 18197c478bd9Sstevel@tonic-gate } 18207c478bd9Sstevel@tonic-gate 1821b5fca8f8Stomee if (KMEM_SLAB_OFFSET(sp, buf) == sp->slab_stuck_offset) { 1822b5fca8f8Stomee /* 1823b5fca8f8Stomee * If this is the buffer that prevented the consolidator from 1824b5fca8f8Stomee * clearing the slab, we can reset the slab flags now that the 1825b5fca8f8Stomee * buffer is freed. (It makes sense to do this in 1826b5fca8f8Stomee * kmem_cache_free(), where the client gives up ownership of the 1827b5fca8f8Stomee * buffer, but on the hot path the test is too expensive.) 1828b5fca8f8Stomee */ 1829b5fca8f8Stomee kmem_slab_move_yes(cp, sp, buf); 1830b5fca8f8Stomee } 1831b5fca8f8Stomee 18327c478bd9Sstevel@tonic-gate if ((cp->cache_flags & (KMF_AUDIT | KMF_BUFTAG)) == KMF_AUDIT) { 18337c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_CONTENTS) 18347c478bd9Sstevel@tonic-gate ((kmem_bufctl_audit_t *)bcp)->bc_contents = 18357c478bd9Sstevel@tonic-gate kmem_log_enter(kmem_content_log, buf, 18369f1b636aStomee cp->cache_contents); 18377c478bd9Sstevel@tonic-gate KMEM_AUDIT(kmem_transaction_log, cp, bcp); 18387c478bd9Sstevel@tonic-gate } 18397c478bd9Sstevel@tonic-gate 18407c478bd9Sstevel@tonic-gate bcp->bc_next = sp->slab_head; 18417c478bd9Sstevel@tonic-gate sp->slab_head = bcp; 18427c478bd9Sstevel@tonic-gate 18439f1b636aStomee cp->cache_bufslab++; 18447c478bd9Sstevel@tonic-gate ASSERT(sp->slab_refcnt >= 1); 1845b5fca8f8Stomee 18467c478bd9Sstevel@tonic-gate if (--sp->slab_refcnt == 0) { 18477c478bd9Sstevel@tonic-gate /* 18487c478bd9Sstevel@tonic-gate * There are no outstanding allocations from this slab, 18497c478bd9Sstevel@tonic-gate * so we can reclaim the memory. 18507c478bd9Sstevel@tonic-gate */ 1851b5fca8f8Stomee if (sp->slab_chunks == 1) { 1852b5fca8f8Stomee list_remove(&cp->cache_complete_slabs, sp); 1853b5fca8f8Stomee cp->cache_complete_slab_count--; 1854b5fca8f8Stomee } else { 1855b5fca8f8Stomee avl_remove(&cp->cache_partial_slabs, sp); 1856b5fca8f8Stomee } 1857b5fca8f8Stomee 18587c478bd9Sstevel@tonic-gate cp->cache_buftotal -= sp->slab_chunks; 18599f1b636aStomee cp->cache_bufslab -= sp->slab_chunks; 1860b5fca8f8Stomee /* 1861b5fca8f8Stomee * Defer releasing the slab to the virtual memory subsystem 1862b5fca8f8Stomee * while there is a pending move callback, since we guarantee 1863b5fca8f8Stomee * that buffers passed to the move callback have only been 1864b5fca8f8Stomee * touched by kmem or by the client itself. 
Since the memory 1865b5fca8f8Stomee * patterns baddcafe (uninitialized) and deadbeef (freed) both 1866b5fca8f8Stomee * set at least one of the two lowest order bits, the client can 1867b5fca8f8Stomee * test those bits in the move callback to determine whether or 1868b5fca8f8Stomee * not it knows about the buffer (assuming that the client also 1869b5fca8f8Stomee * sets one of those low order bits whenever it frees a buffer). 1870b5fca8f8Stomee */ 1871b5fca8f8Stomee if (cp->cache_defrag == NULL || 1872b5fca8f8Stomee (avl_is_empty(&cp->cache_defrag->kmd_moves_pending) && 1873b5fca8f8Stomee !(sp->slab_flags & KMEM_SLAB_MOVE_PENDING))) { 1874b5fca8f8Stomee cp->cache_slab_destroy++; 1875b5fca8f8Stomee mutex_exit(&cp->cache_lock); 1876b5fca8f8Stomee kmem_slab_destroy(cp, sp); 1877b5fca8f8Stomee } else { 1878b5fca8f8Stomee list_t *deadlist = &cp->cache_defrag->kmd_deadlist; 1879b5fca8f8Stomee /* 1880b5fca8f8Stomee * Slabs are inserted at both ends of the deadlist to 1881b5fca8f8Stomee * distinguish between slabs freed while move callbacks 1882b5fca8f8Stomee * are pending (list head) and a slab freed while the 1883b5fca8f8Stomee * lock is dropped in kmem_move_buffers() (list tail) so 1884b5fca8f8Stomee * that in both cases slab_destroy() is called from the 1885b5fca8f8Stomee * right context. 1886b5fca8f8Stomee */ 1887b5fca8f8Stomee if (sp->slab_flags & KMEM_SLAB_MOVE_PENDING) { 1888b5fca8f8Stomee list_insert_tail(deadlist, sp); 1889b5fca8f8Stomee } else { 1890b5fca8f8Stomee list_insert_head(deadlist, sp); 1891b5fca8f8Stomee } 1892b5fca8f8Stomee cp->cache_defrag->kmd_deadcount++; 1893b5fca8f8Stomee mutex_exit(&cp->cache_lock); 1894b5fca8f8Stomee } 18957c478bd9Sstevel@tonic-gate return; 18967c478bd9Sstevel@tonic-gate } 1897b5fca8f8Stomee 1898b5fca8f8Stomee if (bcp->bc_next == NULL) { 1899b5fca8f8Stomee /* Transition the slab from completely allocated to partial. */ 1900b5fca8f8Stomee ASSERT(sp->slab_refcnt == (sp->slab_chunks - 1)); 1901b5fca8f8Stomee ASSERT(sp->slab_chunks > 1); 1902b5fca8f8Stomee list_remove(&cp->cache_complete_slabs, sp); 1903b5fca8f8Stomee cp->cache_complete_slab_count--; 1904b5fca8f8Stomee avl_add(&cp->cache_partial_slabs, sp); 1905b5fca8f8Stomee } else { 1906b5fca8f8Stomee #ifdef DEBUG 1907b5fca8f8Stomee if (avl_update_gt(&cp->cache_partial_slabs, sp)) { 1908b5fca8f8Stomee KMEM_STAT_ADD(kmem_move_stats.kms_avl_update); 1909b5fca8f8Stomee } else { 1910b5fca8f8Stomee KMEM_STAT_ADD(kmem_move_stats.kms_avl_noupdate); 1911b5fca8f8Stomee } 1912b5fca8f8Stomee #else 1913b5fca8f8Stomee (void) avl_update_gt(&cp->cache_partial_slabs, sp); 1914b5fca8f8Stomee #endif 1915b5fca8f8Stomee } 1916b5fca8f8Stomee 1917b5fca8f8Stomee ASSERT((cp->cache_slab_create - cp->cache_slab_destroy) == 1918b5fca8f8Stomee (cp->cache_complete_slab_count + 1919b5fca8f8Stomee avl_numnodes(&cp->cache_partial_slabs) + 1920b5fca8f8Stomee (cp->cache_defrag == NULL ? 0 : cp->cache_defrag->kmd_deadcount))); 19217c478bd9Sstevel@tonic-gate mutex_exit(&cp->cache_lock); 19227c478bd9Sstevel@tonic-gate } 19237c478bd9Sstevel@tonic-gate 1924b5fca8f8Stomee /* 1925b5fca8f8Stomee * Return -1 if kmem_error, 1 if constructor fails, 0 if successful. 
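 *
 * A sketch of a typical caller (illustrative only; kmem_cache_alloc()
 * further below is the authoritative example):
 *
 *	if (kmem_cache_alloc_debug(cp, buf, kmflag, 1, caller()) != 0) {
 *		if (kmflag & KM_NOSLEEP)
 *			return (NULL);
 *		return (kmem_cache_alloc(cp, kmflag));
 *	}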
1926b5fca8f8Stomee */ 19277c478bd9Sstevel@tonic-gate static int 19287c478bd9Sstevel@tonic-gate kmem_cache_alloc_debug(kmem_cache_t *cp, void *buf, int kmflag, int construct, 19297c478bd9Sstevel@tonic-gate caddr_t caller) 19307c478bd9Sstevel@tonic-gate { 19317c478bd9Sstevel@tonic-gate kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 19327c478bd9Sstevel@tonic-gate kmem_bufctl_audit_t *bcp = (kmem_bufctl_audit_t *)btp->bt_bufctl; 19337c478bd9Sstevel@tonic-gate uint32_t mtbf; 19347c478bd9Sstevel@tonic-gate 19357c478bd9Sstevel@tonic-gate if (btp->bt_bxstat != ((intptr_t)bcp ^ KMEM_BUFTAG_FREE)) { 19367c478bd9Sstevel@tonic-gate kmem_error(KMERR_BADBUFTAG, cp, buf); 19377c478bd9Sstevel@tonic-gate return (-1); 19387c478bd9Sstevel@tonic-gate } 19397c478bd9Sstevel@tonic-gate 19407c478bd9Sstevel@tonic-gate btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_ALLOC; 19417c478bd9Sstevel@tonic-gate 19427c478bd9Sstevel@tonic-gate if ((cp->cache_flags & KMF_HASH) && bcp->bc_addr != buf) { 19437c478bd9Sstevel@tonic-gate kmem_error(KMERR_BADBUFCTL, cp, buf); 19447c478bd9Sstevel@tonic-gate return (-1); 19457c478bd9Sstevel@tonic-gate } 19467c478bd9Sstevel@tonic-gate 19477c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_DEADBEEF) { 19487c478bd9Sstevel@tonic-gate if (!construct && (cp->cache_flags & KMF_LITE)) { 19497c478bd9Sstevel@tonic-gate if (*(uint64_t *)buf != KMEM_FREE_PATTERN) { 19507c478bd9Sstevel@tonic-gate kmem_error(KMERR_MODIFIED, cp, buf); 19517c478bd9Sstevel@tonic-gate return (-1); 19527c478bd9Sstevel@tonic-gate } 19537c478bd9Sstevel@tonic-gate if (cp->cache_constructor != NULL) 19547c478bd9Sstevel@tonic-gate *(uint64_t *)buf = btp->bt_redzone; 19557c478bd9Sstevel@tonic-gate else 19567c478bd9Sstevel@tonic-gate *(uint64_t *)buf = KMEM_UNINITIALIZED_PATTERN; 19577c478bd9Sstevel@tonic-gate } else { 19587c478bd9Sstevel@tonic-gate construct = 1; 19597c478bd9Sstevel@tonic-gate if (verify_and_copy_pattern(KMEM_FREE_PATTERN, 19607c478bd9Sstevel@tonic-gate KMEM_UNINITIALIZED_PATTERN, buf, 19617c478bd9Sstevel@tonic-gate cp->cache_verify)) { 19627c478bd9Sstevel@tonic-gate kmem_error(KMERR_MODIFIED, cp, buf); 19637c478bd9Sstevel@tonic-gate return (-1); 19647c478bd9Sstevel@tonic-gate } 19657c478bd9Sstevel@tonic-gate } 19667c478bd9Sstevel@tonic-gate } 19677c478bd9Sstevel@tonic-gate btp->bt_redzone = KMEM_REDZONE_PATTERN; 19687c478bd9Sstevel@tonic-gate 19697c478bd9Sstevel@tonic-gate if ((mtbf = kmem_mtbf | cp->cache_mtbf) != 0 && 19707c478bd9Sstevel@tonic-gate gethrtime() % mtbf == 0 && 19717c478bd9Sstevel@tonic-gate (kmflag & (KM_NOSLEEP | KM_PANIC)) == KM_NOSLEEP) { 19727c478bd9Sstevel@tonic-gate kmem_log_event(kmem_failure_log, cp, NULL, NULL); 19737c478bd9Sstevel@tonic-gate if (!construct && cp->cache_destructor != NULL) 19747c478bd9Sstevel@tonic-gate cp->cache_destructor(buf, cp->cache_private); 19757c478bd9Sstevel@tonic-gate } else { 19767c478bd9Sstevel@tonic-gate mtbf = 0; 19777c478bd9Sstevel@tonic-gate } 19787c478bd9Sstevel@tonic-gate 19797c478bd9Sstevel@tonic-gate if (mtbf || (construct && cp->cache_constructor != NULL && 19807c478bd9Sstevel@tonic-gate cp->cache_constructor(buf, cp->cache_private, kmflag) != 0)) { 19817c478bd9Sstevel@tonic-gate atomic_add_64(&cp->cache_alloc_fail, 1); 19827c478bd9Sstevel@tonic-gate btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE; 19837c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_DEADBEEF) 19847c478bd9Sstevel@tonic-gate copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify); 19857c478bd9Sstevel@tonic-gate kmem_slab_free(cp, buf); 1986b5fca8f8Stomee return (1); 
19877c478bd9Sstevel@tonic-gate } 19887c478bd9Sstevel@tonic-gate 19897c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_AUDIT) { 19907c478bd9Sstevel@tonic-gate KMEM_AUDIT(kmem_transaction_log, cp, bcp); 19917c478bd9Sstevel@tonic-gate } 19927c478bd9Sstevel@tonic-gate 19937c478bd9Sstevel@tonic-gate if ((cp->cache_flags & KMF_LITE) && 19947c478bd9Sstevel@tonic-gate !(cp->cache_cflags & KMC_KMEM_ALLOC)) { 19957c478bd9Sstevel@tonic-gate KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller); 19967c478bd9Sstevel@tonic-gate } 19977c478bd9Sstevel@tonic-gate 19987c478bd9Sstevel@tonic-gate return (0); 19997c478bd9Sstevel@tonic-gate } 20007c478bd9Sstevel@tonic-gate 20017c478bd9Sstevel@tonic-gate static int 20027c478bd9Sstevel@tonic-gate kmem_cache_free_debug(kmem_cache_t *cp, void *buf, caddr_t caller) 20037c478bd9Sstevel@tonic-gate { 20047c478bd9Sstevel@tonic-gate kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 20057c478bd9Sstevel@tonic-gate kmem_bufctl_audit_t *bcp = (kmem_bufctl_audit_t *)btp->bt_bufctl; 20067c478bd9Sstevel@tonic-gate kmem_slab_t *sp; 20077c478bd9Sstevel@tonic-gate 20087c478bd9Sstevel@tonic-gate if (btp->bt_bxstat != ((intptr_t)bcp ^ KMEM_BUFTAG_ALLOC)) { 20097c478bd9Sstevel@tonic-gate if (btp->bt_bxstat == ((intptr_t)bcp ^ KMEM_BUFTAG_FREE)) { 20107c478bd9Sstevel@tonic-gate kmem_error(KMERR_DUPFREE, cp, buf); 20117c478bd9Sstevel@tonic-gate return (-1); 20127c478bd9Sstevel@tonic-gate } 20137c478bd9Sstevel@tonic-gate sp = kmem_findslab(cp, buf); 20147c478bd9Sstevel@tonic-gate if (sp == NULL || sp->slab_cache != cp) 20157c478bd9Sstevel@tonic-gate kmem_error(KMERR_BADADDR, cp, buf); 20167c478bd9Sstevel@tonic-gate else 20177c478bd9Sstevel@tonic-gate kmem_error(KMERR_REDZONE, cp, buf); 20187c478bd9Sstevel@tonic-gate return (-1); 20197c478bd9Sstevel@tonic-gate } 20207c478bd9Sstevel@tonic-gate 20217c478bd9Sstevel@tonic-gate btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE; 20227c478bd9Sstevel@tonic-gate 20237c478bd9Sstevel@tonic-gate if ((cp->cache_flags & KMF_HASH) && bcp->bc_addr != buf) { 20247c478bd9Sstevel@tonic-gate kmem_error(KMERR_BADBUFCTL, cp, buf); 20257c478bd9Sstevel@tonic-gate return (-1); 20267c478bd9Sstevel@tonic-gate } 20277c478bd9Sstevel@tonic-gate 20287c478bd9Sstevel@tonic-gate if (btp->bt_redzone != KMEM_REDZONE_PATTERN) { 20297c478bd9Sstevel@tonic-gate kmem_error(KMERR_REDZONE, cp, buf); 20307c478bd9Sstevel@tonic-gate return (-1); 20317c478bd9Sstevel@tonic-gate } 20327c478bd9Sstevel@tonic-gate 20337c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_AUDIT) { 20347c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_CONTENTS) 20357c478bd9Sstevel@tonic-gate bcp->bc_contents = kmem_log_enter(kmem_content_log, 20367c478bd9Sstevel@tonic-gate buf, cp->cache_contents); 20377c478bd9Sstevel@tonic-gate KMEM_AUDIT(kmem_transaction_log, cp, bcp); 20387c478bd9Sstevel@tonic-gate } 20397c478bd9Sstevel@tonic-gate 20407c478bd9Sstevel@tonic-gate if ((cp->cache_flags & KMF_LITE) && 20417c478bd9Sstevel@tonic-gate !(cp->cache_cflags & KMC_KMEM_ALLOC)) { 20427c478bd9Sstevel@tonic-gate KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller); 20437c478bd9Sstevel@tonic-gate } 20447c478bd9Sstevel@tonic-gate 20457c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_DEADBEEF) { 20467c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_LITE) 20477c478bd9Sstevel@tonic-gate btp->bt_redzone = *(uint64_t *)buf; 20487c478bd9Sstevel@tonic-gate else if (cp->cache_destructor != NULL) 20497c478bd9Sstevel@tonic-gate cp->cache_destructor(buf, cp->cache_private); 20507c478bd9Sstevel@tonic-gate 
20517c478bd9Sstevel@tonic-gate copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify); 20527c478bd9Sstevel@tonic-gate } 20537c478bd9Sstevel@tonic-gate 20547c478bd9Sstevel@tonic-gate return (0); 20557c478bd9Sstevel@tonic-gate } 20567c478bd9Sstevel@tonic-gate 20577c478bd9Sstevel@tonic-gate /* 20587c478bd9Sstevel@tonic-gate * Free each object in magazine mp to cp's slab layer, and free mp itself. 20597c478bd9Sstevel@tonic-gate */ 20607c478bd9Sstevel@tonic-gate static void 20617c478bd9Sstevel@tonic-gate kmem_magazine_destroy(kmem_cache_t *cp, kmem_magazine_t *mp, int nrounds) 20627c478bd9Sstevel@tonic-gate { 20637c478bd9Sstevel@tonic-gate int round; 20647c478bd9Sstevel@tonic-gate 2065b5fca8f8Stomee ASSERT(!list_link_active(&cp->cache_link) || 2066b5fca8f8Stomee taskq_member(kmem_taskq, curthread)); 20677c478bd9Sstevel@tonic-gate 20687c478bd9Sstevel@tonic-gate for (round = 0; round < nrounds; round++) { 20697c478bd9Sstevel@tonic-gate void *buf = mp->mag_round[round]; 20707c478bd9Sstevel@tonic-gate 20717c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_DEADBEEF) { 20727c478bd9Sstevel@tonic-gate if (verify_pattern(KMEM_FREE_PATTERN, buf, 20737c478bd9Sstevel@tonic-gate cp->cache_verify) != NULL) { 20747c478bd9Sstevel@tonic-gate kmem_error(KMERR_MODIFIED, cp, buf); 20757c478bd9Sstevel@tonic-gate continue; 20767c478bd9Sstevel@tonic-gate } 20777c478bd9Sstevel@tonic-gate if ((cp->cache_flags & KMF_LITE) && 20787c478bd9Sstevel@tonic-gate cp->cache_destructor != NULL) { 20797c478bd9Sstevel@tonic-gate kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 20807c478bd9Sstevel@tonic-gate *(uint64_t *)buf = btp->bt_redzone; 20817c478bd9Sstevel@tonic-gate cp->cache_destructor(buf, cp->cache_private); 20827c478bd9Sstevel@tonic-gate *(uint64_t *)buf = KMEM_FREE_PATTERN; 20837c478bd9Sstevel@tonic-gate } 20847c478bd9Sstevel@tonic-gate } else if (cp->cache_destructor != NULL) { 20857c478bd9Sstevel@tonic-gate cp->cache_destructor(buf, cp->cache_private); 20867c478bd9Sstevel@tonic-gate } 20877c478bd9Sstevel@tonic-gate 20887c478bd9Sstevel@tonic-gate kmem_slab_free(cp, buf); 20897c478bd9Sstevel@tonic-gate } 20907c478bd9Sstevel@tonic-gate ASSERT(KMEM_MAGAZINE_VALID(cp, mp)); 20917c478bd9Sstevel@tonic-gate kmem_cache_free(cp->cache_magtype->mt_cache, mp); 20927c478bd9Sstevel@tonic-gate } 20937c478bd9Sstevel@tonic-gate 20947c478bd9Sstevel@tonic-gate /* 20957c478bd9Sstevel@tonic-gate * Allocate a magazine from the depot. 20967c478bd9Sstevel@tonic-gate */ 20977c478bd9Sstevel@tonic-gate static kmem_magazine_t * 20987c478bd9Sstevel@tonic-gate kmem_depot_alloc(kmem_cache_t *cp, kmem_maglist_t *mlp) 20997c478bd9Sstevel@tonic-gate { 21007c478bd9Sstevel@tonic-gate kmem_magazine_t *mp; 21017c478bd9Sstevel@tonic-gate 21027c478bd9Sstevel@tonic-gate /* 21037c478bd9Sstevel@tonic-gate * If we can't get the depot lock without contention, 21047c478bd9Sstevel@tonic-gate * update our contention count. We use the depot 21057c478bd9Sstevel@tonic-gate * contention rate to determine whether we need to 21067c478bd9Sstevel@tonic-gate * increase the magazine size for better scalability. 
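 * (kmem_cache_update() compares the growth of cache_depot_contention
 * over one update interval against the kmem_depot_contention threshold,
 * and dispatches kmem_cache_magazine_resize() when it is exceeded.)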
21077c478bd9Sstevel@tonic-gate */ 21087c478bd9Sstevel@tonic-gate if (!mutex_tryenter(&cp->cache_depot_lock)) { 21097c478bd9Sstevel@tonic-gate mutex_enter(&cp->cache_depot_lock); 21107c478bd9Sstevel@tonic-gate cp->cache_depot_contention++; 21117c478bd9Sstevel@tonic-gate } 21127c478bd9Sstevel@tonic-gate 21137c478bd9Sstevel@tonic-gate if ((mp = mlp->ml_list) != NULL) { 21147c478bd9Sstevel@tonic-gate ASSERT(KMEM_MAGAZINE_VALID(cp, mp)); 21157c478bd9Sstevel@tonic-gate mlp->ml_list = mp->mag_next; 21167c478bd9Sstevel@tonic-gate if (--mlp->ml_total < mlp->ml_min) 21177c478bd9Sstevel@tonic-gate mlp->ml_min = mlp->ml_total; 21187c478bd9Sstevel@tonic-gate mlp->ml_alloc++; 21197c478bd9Sstevel@tonic-gate } 21207c478bd9Sstevel@tonic-gate 21217c478bd9Sstevel@tonic-gate mutex_exit(&cp->cache_depot_lock); 21227c478bd9Sstevel@tonic-gate 21237c478bd9Sstevel@tonic-gate return (mp); 21247c478bd9Sstevel@tonic-gate } 21257c478bd9Sstevel@tonic-gate 21267c478bd9Sstevel@tonic-gate /* 21277c478bd9Sstevel@tonic-gate * Free a magazine to the depot. 21287c478bd9Sstevel@tonic-gate */ 21297c478bd9Sstevel@tonic-gate static void 21307c478bd9Sstevel@tonic-gate kmem_depot_free(kmem_cache_t *cp, kmem_maglist_t *mlp, kmem_magazine_t *mp) 21317c478bd9Sstevel@tonic-gate { 21327c478bd9Sstevel@tonic-gate mutex_enter(&cp->cache_depot_lock); 21337c478bd9Sstevel@tonic-gate ASSERT(KMEM_MAGAZINE_VALID(cp, mp)); 21347c478bd9Sstevel@tonic-gate mp->mag_next = mlp->ml_list; 21357c478bd9Sstevel@tonic-gate mlp->ml_list = mp; 21367c478bd9Sstevel@tonic-gate mlp->ml_total++; 21377c478bd9Sstevel@tonic-gate mutex_exit(&cp->cache_depot_lock); 21387c478bd9Sstevel@tonic-gate } 21397c478bd9Sstevel@tonic-gate 21407c478bd9Sstevel@tonic-gate /* 21417c478bd9Sstevel@tonic-gate * Update the working set statistics for cp's depot. 21427c478bd9Sstevel@tonic-gate */ 21437c478bd9Sstevel@tonic-gate static void 21447c478bd9Sstevel@tonic-gate kmem_depot_ws_update(kmem_cache_t *cp) 21457c478bd9Sstevel@tonic-gate { 21467c478bd9Sstevel@tonic-gate mutex_enter(&cp->cache_depot_lock); 21477c478bd9Sstevel@tonic-gate cp->cache_full.ml_reaplimit = cp->cache_full.ml_min; 21487c478bd9Sstevel@tonic-gate cp->cache_full.ml_min = cp->cache_full.ml_total; 21497c478bd9Sstevel@tonic-gate cp->cache_empty.ml_reaplimit = cp->cache_empty.ml_min; 21507c478bd9Sstevel@tonic-gate cp->cache_empty.ml_min = cp->cache_empty.ml_total; 21517c478bd9Sstevel@tonic-gate mutex_exit(&cp->cache_depot_lock); 21527c478bd9Sstevel@tonic-gate } 21537c478bd9Sstevel@tonic-gate 21547c478bd9Sstevel@tonic-gate /* 21557c478bd9Sstevel@tonic-gate * Reap all magazines that have fallen out of the depot's working set. 
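 * The working set is whatever the depot actually used over the last two
 * update intervals; anything beyond that is surplus. A sketch of the
 * arithmetic applied below to both the full and empty magazine lists:
 *
 *	reap = MIN(ml_reaplimit, ml_min)
 *
 * so if, say, at least 3 of the full magazines went untouched across
 * both intervals, up to 3 of them are destroyed.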
21567c478bd9Sstevel@tonic-gate */ 21577c478bd9Sstevel@tonic-gate static void 21587c478bd9Sstevel@tonic-gate kmem_depot_ws_reap(kmem_cache_t *cp) 21597c478bd9Sstevel@tonic-gate { 21607c478bd9Sstevel@tonic-gate long reap; 21617c478bd9Sstevel@tonic-gate kmem_magazine_t *mp; 21627c478bd9Sstevel@tonic-gate 2163b5fca8f8Stomee ASSERT(!list_link_active(&cp->cache_link) || 2164b5fca8f8Stomee taskq_member(kmem_taskq, curthread)); 21657c478bd9Sstevel@tonic-gate 21667c478bd9Sstevel@tonic-gate reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min); 21677c478bd9Sstevel@tonic-gate while (reap-- && (mp = kmem_depot_alloc(cp, &cp->cache_full)) != NULL) 21687c478bd9Sstevel@tonic-gate kmem_magazine_destroy(cp, mp, cp->cache_magtype->mt_magsize); 21697c478bd9Sstevel@tonic-gate 21707c478bd9Sstevel@tonic-gate reap = MIN(cp->cache_empty.ml_reaplimit, cp->cache_empty.ml_min); 21717c478bd9Sstevel@tonic-gate while (reap-- && (mp = kmem_depot_alloc(cp, &cp->cache_empty)) != NULL) 21727c478bd9Sstevel@tonic-gate kmem_magazine_destroy(cp, mp, 0); 21737c478bd9Sstevel@tonic-gate } 21747c478bd9Sstevel@tonic-gate 21757c478bd9Sstevel@tonic-gate static void 21767c478bd9Sstevel@tonic-gate kmem_cpu_reload(kmem_cpu_cache_t *ccp, kmem_magazine_t *mp, int rounds) 21777c478bd9Sstevel@tonic-gate { 21787c478bd9Sstevel@tonic-gate ASSERT((ccp->cc_loaded == NULL && ccp->cc_rounds == -1) || 21797c478bd9Sstevel@tonic-gate (ccp->cc_loaded && ccp->cc_rounds + rounds == ccp->cc_magsize)); 21807c478bd9Sstevel@tonic-gate ASSERT(ccp->cc_magsize > 0); 21817c478bd9Sstevel@tonic-gate 21827c478bd9Sstevel@tonic-gate ccp->cc_ploaded = ccp->cc_loaded; 21837c478bd9Sstevel@tonic-gate ccp->cc_prounds = ccp->cc_rounds; 21847c478bd9Sstevel@tonic-gate ccp->cc_loaded = mp; 21857c478bd9Sstevel@tonic-gate ccp->cc_rounds = rounds; 21867c478bd9Sstevel@tonic-gate } 21877c478bd9Sstevel@tonic-gate 21887c478bd9Sstevel@tonic-gate /* 21897c478bd9Sstevel@tonic-gate * Allocate a constructed object from cache cp. 21907c478bd9Sstevel@tonic-gate */ 21917c478bd9Sstevel@tonic-gate void * 21927c478bd9Sstevel@tonic-gate kmem_cache_alloc(kmem_cache_t *cp, int kmflag) 21937c478bd9Sstevel@tonic-gate { 21947c478bd9Sstevel@tonic-gate kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp); 21957c478bd9Sstevel@tonic-gate kmem_magazine_t *fmp; 21967c478bd9Sstevel@tonic-gate void *buf; 21977c478bd9Sstevel@tonic-gate 21987c478bd9Sstevel@tonic-gate mutex_enter(&ccp->cc_lock); 21997c478bd9Sstevel@tonic-gate for (;;) { 22007c478bd9Sstevel@tonic-gate /* 22017c478bd9Sstevel@tonic-gate * If there's an object available in the current CPU's 22027c478bd9Sstevel@tonic-gate * loaded magazine, just take it and return. 22037c478bd9Sstevel@tonic-gate */ 22047c478bd9Sstevel@tonic-gate if (ccp->cc_rounds > 0) { 22057c478bd9Sstevel@tonic-gate buf = ccp->cc_loaded->mag_round[--ccp->cc_rounds]; 22067c478bd9Sstevel@tonic-gate ccp->cc_alloc++; 22077c478bd9Sstevel@tonic-gate mutex_exit(&ccp->cc_lock); 22087c478bd9Sstevel@tonic-gate if ((ccp->cc_flags & KMF_BUFTAG) && 22097c478bd9Sstevel@tonic-gate kmem_cache_alloc_debug(cp, buf, kmflag, 0, 2210b5fca8f8Stomee caller()) != 0) { 22117c478bd9Sstevel@tonic-gate if (kmflag & KM_NOSLEEP) 22127c478bd9Sstevel@tonic-gate return (NULL); 22137c478bd9Sstevel@tonic-gate mutex_enter(&ccp->cc_lock); 22147c478bd9Sstevel@tonic-gate continue; 22157c478bd9Sstevel@tonic-gate } 22167c478bd9Sstevel@tonic-gate return (buf); 22177c478bd9Sstevel@tonic-gate } 22187c478bd9Sstevel@tonic-gate 22197c478bd9Sstevel@tonic-gate /* 22207c478bd9Sstevel@tonic-gate * The loaded magazine is empty. 
If the previously loaded 22217c478bd9Sstevel@tonic-gate * magazine was full, exchange them and try again. 22227c478bd9Sstevel@tonic-gate */ 22237c478bd9Sstevel@tonic-gate if (ccp->cc_prounds > 0) { 22247c478bd9Sstevel@tonic-gate kmem_cpu_reload(ccp, ccp->cc_ploaded, ccp->cc_prounds); 22257c478bd9Sstevel@tonic-gate continue; 22267c478bd9Sstevel@tonic-gate } 22277c478bd9Sstevel@tonic-gate 22287c478bd9Sstevel@tonic-gate /* 22297c478bd9Sstevel@tonic-gate * If the magazine layer is disabled, break out now. 22307c478bd9Sstevel@tonic-gate */ 22317c478bd9Sstevel@tonic-gate if (ccp->cc_magsize == 0) 22327c478bd9Sstevel@tonic-gate break; 22337c478bd9Sstevel@tonic-gate 22347c478bd9Sstevel@tonic-gate /* 22357c478bd9Sstevel@tonic-gate * Try to get a full magazine from the depot. 22367c478bd9Sstevel@tonic-gate */ 22377c478bd9Sstevel@tonic-gate fmp = kmem_depot_alloc(cp, &cp->cache_full); 22387c478bd9Sstevel@tonic-gate if (fmp != NULL) { 22397c478bd9Sstevel@tonic-gate if (ccp->cc_ploaded != NULL) 22407c478bd9Sstevel@tonic-gate kmem_depot_free(cp, &cp->cache_empty, 22417c478bd9Sstevel@tonic-gate ccp->cc_ploaded); 22427c478bd9Sstevel@tonic-gate kmem_cpu_reload(ccp, fmp, ccp->cc_magsize); 22437c478bd9Sstevel@tonic-gate continue; 22447c478bd9Sstevel@tonic-gate } 22457c478bd9Sstevel@tonic-gate 22467c478bd9Sstevel@tonic-gate /* 22477c478bd9Sstevel@tonic-gate * There are no full magazines in the depot, 22487c478bd9Sstevel@tonic-gate * so fall through to the slab layer. 22497c478bd9Sstevel@tonic-gate */ 22507c478bd9Sstevel@tonic-gate break; 22517c478bd9Sstevel@tonic-gate } 22527c478bd9Sstevel@tonic-gate mutex_exit(&ccp->cc_lock); 22537c478bd9Sstevel@tonic-gate 22547c478bd9Sstevel@tonic-gate /* 22557c478bd9Sstevel@tonic-gate * We couldn't allocate a constructed object from the magazine layer, 22567c478bd9Sstevel@tonic-gate * so get a raw buffer from the slab layer and apply its constructor. 22577c478bd9Sstevel@tonic-gate */ 22587c478bd9Sstevel@tonic-gate buf = kmem_slab_alloc(cp, kmflag); 22597c478bd9Sstevel@tonic-gate 22607c478bd9Sstevel@tonic-gate if (buf == NULL) 22617c478bd9Sstevel@tonic-gate return (NULL); 22627c478bd9Sstevel@tonic-gate 22637c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_BUFTAG) { 22647c478bd9Sstevel@tonic-gate /* 22657c478bd9Sstevel@tonic-gate * Make kmem_cache_alloc_debug() apply the constructor for us. 22667c478bd9Sstevel@tonic-gate */ 2267b5fca8f8Stomee int rc = kmem_cache_alloc_debug(cp, buf, kmflag, 1, caller()); 2268b5fca8f8Stomee if (rc != 0) { 22697c478bd9Sstevel@tonic-gate if (kmflag & KM_NOSLEEP) 22707c478bd9Sstevel@tonic-gate return (NULL); 22717c478bd9Sstevel@tonic-gate /* 22727c478bd9Sstevel@tonic-gate * kmem_cache_alloc_debug() detected corruption 2273b5fca8f8Stomee * but didn't panic (kmem_panic <= 0). We should not be 2274b5fca8f8Stomee * here because the constructor failed (indicated by a 2275b5fca8f8Stomee * return code of 1). Try again. 
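 * (With KM_NOSLEEP ruled out just above, the mtbf logic cannot have
 * fired, and a constructor must not fail for a sleeping allocation, so
 * -1, detected corruption, is the only value left.)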
22767c478bd9Sstevel@tonic-gate */ 2277b5fca8f8Stomee ASSERT(rc == -1); 22787c478bd9Sstevel@tonic-gate return (kmem_cache_alloc(cp, kmflag)); 22797c478bd9Sstevel@tonic-gate } 22807c478bd9Sstevel@tonic-gate return (buf); 22817c478bd9Sstevel@tonic-gate } 22827c478bd9Sstevel@tonic-gate 22837c478bd9Sstevel@tonic-gate if (cp->cache_constructor != NULL && 22847c478bd9Sstevel@tonic-gate cp->cache_constructor(buf, cp->cache_private, kmflag) != 0) { 22857c478bd9Sstevel@tonic-gate atomic_add_64(&cp->cache_alloc_fail, 1); 22867c478bd9Sstevel@tonic-gate kmem_slab_free(cp, buf); 22877c478bd9Sstevel@tonic-gate return (NULL); 22887c478bd9Sstevel@tonic-gate } 22897c478bd9Sstevel@tonic-gate 22907c478bd9Sstevel@tonic-gate return (buf); 22917c478bd9Sstevel@tonic-gate } 22927c478bd9Sstevel@tonic-gate 22937c478bd9Sstevel@tonic-gate /* 2294b5fca8f8Stomee * The freed argument tells whether or not kmem_cache_free_debug() has already 2295b5fca8f8Stomee * been called so that we can avoid the duplicate free error. For example, a 2296b5fca8f8Stomee * buffer on a magazine has already been freed by the client but is still 2297b5fca8f8Stomee * constructed. 22987c478bd9Sstevel@tonic-gate */ 2299b5fca8f8Stomee static void 2300b5fca8f8Stomee kmem_slab_free_constructed(kmem_cache_t *cp, void *buf, boolean_t freed) 23017c478bd9Sstevel@tonic-gate { 2302b5fca8f8Stomee if (!freed && (cp->cache_flags & KMF_BUFTAG)) 23037c478bd9Sstevel@tonic-gate if (kmem_cache_free_debug(cp, buf, caller()) == -1) 23047c478bd9Sstevel@tonic-gate return; 23057c478bd9Sstevel@tonic-gate 2306b5fca8f8Stomee /* 2307b5fca8f8Stomee * Note that if KMF_DEADBEEF is in effect and KMF_LITE is not, 2308b5fca8f8Stomee * kmem_cache_free_debug() will have already applied the destructor. 2309b5fca8f8Stomee */ 2310b5fca8f8Stomee if ((cp->cache_flags & (KMF_DEADBEEF | KMF_LITE)) != KMF_DEADBEEF && 2311b5fca8f8Stomee cp->cache_destructor != NULL) { 2312b5fca8f8Stomee if (cp->cache_flags & KMF_DEADBEEF) { /* KMF_LITE implied */ 2313b5fca8f8Stomee kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2314b5fca8f8Stomee *(uint64_t *)buf = btp->bt_redzone; 2315b5fca8f8Stomee cp->cache_destructor(buf, cp->cache_private); 2316b5fca8f8Stomee *(uint64_t *)buf = KMEM_FREE_PATTERN; 2317b5fca8f8Stomee } else { 2318b5fca8f8Stomee cp->cache_destructor(buf, cp->cache_private); 2319b5fca8f8Stomee } 2320b5fca8f8Stomee } 2321b5fca8f8Stomee 2322b5fca8f8Stomee kmem_slab_free(cp, buf); 2323b5fca8f8Stomee } 2324b5fca8f8Stomee 2325b5fca8f8Stomee /* 2326b5fca8f8Stomee * Free a constructed object to cache cp. 2327b5fca8f8Stomee */ 2328b5fca8f8Stomee void 2329b5fca8f8Stomee kmem_cache_free(kmem_cache_t *cp, void *buf) 2330b5fca8f8Stomee { 2331b5fca8f8Stomee kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp); 2332b5fca8f8Stomee kmem_magazine_t *emp; 2333b5fca8f8Stomee kmem_magtype_t *mtp; 2334b5fca8f8Stomee 2335b5fca8f8Stomee /* 2336b5fca8f8Stomee * The client must not free either of the buffers passed to the move 2337b5fca8f8Stomee * callback function. 
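 * (The ASSERT below enforces this for the thread currently running the
 * move callback: kmd_from_buf and kmd_to_buf are the two buffers handed
 * to the cache's move() function.)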
2338b5fca8f8Stomee */ 2339b5fca8f8Stomee ASSERT(cp->cache_defrag == NULL || 2340b5fca8f8Stomee cp->cache_defrag->kmd_thread != curthread || 2341b5fca8f8Stomee (buf != cp->cache_defrag->kmd_from_buf && 2342b5fca8f8Stomee buf != cp->cache_defrag->kmd_to_buf)); 2343b5fca8f8Stomee 2344b5fca8f8Stomee if (ccp->cc_flags & KMF_BUFTAG) 2345b5fca8f8Stomee if (kmem_cache_free_debug(cp, buf, caller()) == -1) 2346b5fca8f8Stomee return; 2347b5fca8f8Stomee 2348b5fca8f8Stomee mutex_enter(&ccp->cc_lock); 2349b5fca8f8Stomee for (;;) { 2350b5fca8f8Stomee /* 2351b5fca8f8Stomee * If there's a slot available in the current CPU's 2352b5fca8f8Stomee * loaded magazine, just put the object there and return. 2353b5fca8f8Stomee */ 2354b5fca8f8Stomee if ((uint_t)ccp->cc_rounds < ccp->cc_magsize) { 2355b5fca8f8Stomee ccp->cc_loaded->mag_round[ccp->cc_rounds++] = buf; 2356b5fca8f8Stomee ccp->cc_free++; 2357b5fca8f8Stomee mutex_exit(&ccp->cc_lock); 2358b5fca8f8Stomee return; 2359b5fca8f8Stomee } 2360b5fca8f8Stomee 23617c478bd9Sstevel@tonic-gate /* 23627c478bd9Sstevel@tonic-gate * The loaded magazine is full. If the previously loaded 23637c478bd9Sstevel@tonic-gate * magazine was empty, exchange them and try again. 23647c478bd9Sstevel@tonic-gate */ 23657c478bd9Sstevel@tonic-gate if (ccp->cc_prounds == 0) { 23667c478bd9Sstevel@tonic-gate kmem_cpu_reload(ccp, ccp->cc_ploaded, ccp->cc_prounds); 23677c478bd9Sstevel@tonic-gate continue; 23687c478bd9Sstevel@tonic-gate } 23697c478bd9Sstevel@tonic-gate 23707c478bd9Sstevel@tonic-gate /* 23717c478bd9Sstevel@tonic-gate * If the magazine layer is disabled, break out now. 23727c478bd9Sstevel@tonic-gate */ 23737c478bd9Sstevel@tonic-gate if (ccp->cc_magsize == 0) 23747c478bd9Sstevel@tonic-gate break; 23757c478bd9Sstevel@tonic-gate 23767c478bd9Sstevel@tonic-gate /* 23777c478bd9Sstevel@tonic-gate * Try to get an empty magazine from the depot. 23787c478bd9Sstevel@tonic-gate */ 23797c478bd9Sstevel@tonic-gate emp = kmem_depot_alloc(cp, &cp->cache_empty); 23807c478bd9Sstevel@tonic-gate if (emp != NULL) { 23817c478bd9Sstevel@tonic-gate if (ccp->cc_ploaded != NULL) 23827c478bd9Sstevel@tonic-gate kmem_depot_free(cp, &cp->cache_full, 23837c478bd9Sstevel@tonic-gate ccp->cc_ploaded); 23847c478bd9Sstevel@tonic-gate kmem_cpu_reload(ccp, emp, 0); 23857c478bd9Sstevel@tonic-gate continue; 23867c478bd9Sstevel@tonic-gate } 23877c478bd9Sstevel@tonic-gate 23887c478bd9Sstevel@tonic-gate /* 23897c478bd9Sstevel@tonic-gate * There are no empty magazines in the depot, 23907c478bd9Sstevel@tonic-gate * so try to allocate a new one. We must drop all locks 23917c478bd9Sstevel@tonic-gate * across kmem_cache_alloc() because lower layers may 23927c478bd9Sstevel@tonic-gate * attempt to allocate from this cache. 23937c478bd9Sstevel@tonic-gate */ 23947c478bd9Sstevel@tonic-gate mtp = cp->cache_magtype; 23957c478bd9Sstevel@tonic-gate mutex_exit(&ccp->cc_lock); 23967c478bd9Sstevel@tonic-gate emp = kmem_cache_alloc(mtp->mt_cache, KM_NOSLEEP); 23977c478bd9Sstevel@tonic-gate mutex_enter(&ccp->cc_lock); 23987c478bd9Sstevel@tonic-gate 23997c478bd9Sstevel@tonic-gate if (emp != NULL) { 24007c478bd9Sstevel@tonic-gate /* 24017c478bd9Sstevel@tonic-gate * We successfully allocated an empty magazine. 24027c478bd9Sstevel@tonic-gate * However, we had to drop ccp->cc_lock to do it, 24037c478bd9Sstevel@tonic-gate * so the cache's magazine size may have changed. 24047c478bd9Sstevel@tonic-gate * If so, free the magazine and try again. 
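 * (kmem_cache_magazine_resize() may have advanced cache_magtype while
 * cc_lock was dropped, in which case the magazine we allocated from the
 * old mt_cache is the wrong size for the depot.)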
24057c478bd9Sstevel@tonic-gate */ 24067c478bd9Sstevel@tonic-gate if (ccp->cc_magsize != mtp->mt_magsize) { 24077c478bd9Sstevel@tonic-gate mutex_exit(&ccp->cc_lock); 24087c478bd9Sstevel@tonic-gate kmem_cache_free(mtp->mt_cache, emp); 24097c478bd9Sstevel@tonic-gate mutex_enter(&ccp->cc_lock); 24107c478bd9Sstevel@tonic-gate continue; 24117c478bd9Sstevel@tonic-gate } 24127c478bd9Sstevel@tonic-gate 24137c478bd9Sstevel@tonic-gate /* 24147c478bd9Sstevel@tonic-gate * We got a magazine of the right size. Add it to 24157c478bd9Sstevel@tonic-gate * the depot and try the whole dance again. 24167c478bd9Sstevel@tonic-gate */ 24177c478bd9Sstevel@tonic-gate kmem_depot_free(cp, &cp->cache_empty, emp); 24187c478bd9Sstevel@tonic-gate continue; 24197c478bd9Sstevel@tonic-gate } 24207c478bd9Sstevel@tonic-gate 24217c478bd9Sstevel@tonic-gate /* 24227c478bd9Sstevel@tonic-gate * We couldn't allocate an empty magazine, 24237c478bd9Sstevel@tonic-gate * so fall through to the slab layer. 24247c478bd9Sstevel@tonic-gate */ 24257c478bd9Sstevel@tonic-gate break; 24267c478bd9Sstevel@tonic-gate } 24277c478bd9Sstevel@tonic-gate mutex_exit(&ccp->cc_lock); 24287c478bd9Sstevel@tonic-gate 24297c478bd9Sstevel@tonic-gate /* 24307c478bd9Sstevel@tonic-gate * We couldn't free our constructed object to the magazine layer, 24317c478bd9Sstevel@tonic-gate * so apply its destructor and free it to the slab layer. 24327c478bd9Sstevel@tonic-gate */ 2433b5fca8f8Stomee kmem_slab_free_constructed(cp, buf, B_TRUE); 24347c478bd9Sstevel@tonic-gate } 24357c478bd9Sstevel@tonic-gate 24367c478bd9Sstevel@tonic-gate void * 24377c478bd9Sstevel@tonic-gate kmem_zalloc(size_t size, int kmflag) 24387c478bd9Sstevel@tonic-gate { 2439dce01e3fSJonathan W Adams size_t index; 24407c478bd9Sstevel@tonic-gate void *buf; 24417c478bd9Sstevel@tonic-gate 2442dce01e3fSJonathan W Adams if ((index = ((size - 1) >> KMEM_ALIGN_SHIFT)) < KMEM_ALLOC_TABLE_MAX) { 24437c478bd9Sstevel@tonic-gate kmem_cache_t *cp = kmem_alloc_table[index]; 24447c478bd9Sstevel@tonic-gate buf = kmem_cache_alloc(cp, kmflag); 24457c478bd9Sstevel@tonic-gate if (buf != NULL) { 24467c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_BUFTAG) { 24477c478bd9Sstevel@tonic-gate kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 24487c478bd9Sstevel@tonic-gate ((uint8_t *)buf)[size] = KMEM_REDZONE_BYTE; 24497c478bd9Sstevel@tonic-gate ((uint32_t *)btp)[1] = KMEM_SIZE_ENCODE(size); 24507c478bd9Sstevel@tonic-gate 24517c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_LITE) { 24527c478bd9Sstevel@tonic-gate KMEM_BUFTAG_LITE_ENTER(btp, 24537c478bd9Sstevel@tonic-gate kmem_lite_count, caller()); 24547c478bd9Sstevel@tonic-gate } 24557c478bd9Sstevel@tonic-gate } 24567c478bd9Sstevel@tonic-gate bzero(buf, size); 24577c478bd9Sstevel@tonic-gate } 24587c478bd9Sstevel@tonic-gate } else { 24597c478bd9Sstevel@tonic-gate buf = kmem_alloc(size, kmflag); 24607c478bd9Sstevel@tonic-gate if (buf != NULL) 24617c478bd9Sstevel@tonic-gate bzero(buf, size); 24627c478bd9Sstevel@tonic-gate } 24637c478bd9Sstevel@tonic-gate return (buf); 24647c478bd9Sstevel@tonic-gate } 24657c478bd9Sstevel@tonic-gate 24667c478bd9Sstevel@tonic-gate void * 24677c478bd9Sstevel@tonic-gate kmem_alloc(size_t size, int kmflag) 24687c478bd9Sstevel@tonic-gate { 2469dce01e3fSJonathan W Adams size_t index; 2470dce01e3fSJonathan W Adams kmem_cache_t *cp; 24717c478bd9Sstevel@tonic-gate void *buf; 24727c478bd9Sstevel@tonic-gate 2473dce01e3fSJonathan W Adams if ((index = ((size - 1) >> KMEM_ALIGN_SHIFT)) < KMEM_ALLOC_TABLE_MAX) { 2474dce01e3fSJonathan W Adams cp = 
kmem_alloc_table[index]; 2475dce01e3fSJonathan W Adams /* fall through to kmem_cache_alloc() */ 24767c478bd9Sstevel@tonic-gate 2477dce01e3fSJonathan W Adams } else if ((index = ((size - 1) >> KMEM_BIG_SHIFT)) < 2478dce01e3fSJonathan W Adams kmem_big_alloc_table_max) { 2479dce01e3fSJonathan W Adams cp = kmem_big_alloc_table[index]; 2480dce01e3fSJonathan W Adams /* fall through to kmem_cache_alloc() */ 2481dce01e3fSJonathan W Adams 2482dce01e3fSJonathan W Adams } else { 2483dce01e3fSJonathan W Adams if (size == 0) 2484dce01e3fSJonathan W Adams return (NULL); 2485dce01e3fSJonathan W Adams 2486dce01e3fSJonathan W Adams buf = vmem_alloc(kmem_oversize_arena, size, 2487dce01e3fSJonathan W Adams kmflag & KM_VMFLAGS); 2488dce01e3fSJonathan W Adams if (buf == NULL) 2489dce01e3fSJonathan W Adams kmem_log_event(kmem_failure_log, NULL, NULL, 2490dce01e3fSJonathan W Adams (void *)size); 24917c478bd9Sstevel@tonic-gate return (buf); 24927c478bd9Sstevel@tonic-gate } 2493dce01e3fSJonathan W Adams 2494dce01e3fSJonathan W Adams buf = kmem_cache_alloc(cp, kmflag); 2495dce01e3fSJonathan W Adams if ((cp->cache_flags & KMF_BUFTAG) && buf != NULL) { 2496dce01e3fSJonathan W Adams kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2497dce01e3fSJonathan W Adams ((uint8_t *)buf)[size] = KMEM_REDZONE_BYTE; 2498dce01e3fSJonathan W Adams ((uint32_t *)btp)[1] = KMEM_SIZE_ENCODE(size); 2499dce01e3fSJonathan W Adams 2500dce01e3fSJonathan W Adams if (cp->cache_flags & KMF_LITE) { 2501dce01e3fSJonathan W Adams KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller()); 2502dce01e3fSJonathan W Adams } 2503dce01e3fSJonathan W Adams } 25047c478bd9Sstevel@tonic-gate return (buf); 25057c478bd9Sstevel@tonic-gate } 25067c478bd9Sstevel@tonic-gate 25077c478bd9Sstevel@tonic-gate void 25087c478bd9Sstevel@tonic-gate kmem_free(void *buf, size_t size) 25097c478bd9Sstevel@tonic-gate { 2510dce01e3fSJonathan W Adams size_t index; 2511dce01e3fSJonathan W Adams kmem_cache_t *cp; 25127c478bd9Sstevel@tonic-gate 2513dce01e3fSJonathan W Adams if ((index = (size - 1) >> KMEM_ALIGN_SHIFT) < KMEM_ALLOC_TABLE_MAX) { 2514dce01e3fSJonathan W Adams cp = kmem_alloc_table[index]; 2515dce01e3fSJonathan W Adams /* fall through to kmem_cache_free() */ 2516dce01e3fSJonathan W Adams 2517dce01e3fSJonathan W Adams } else if ((index = ((size - 1) >> KMEM_BIG_SHIFT)) < 2518dce01e3fSJonathan W Adams kmem_big_alloc_table_max) { 2519dce01e3fSJonathan W Adams cp = kmem_big_alloc_table[index]; 2520dce01e3fSJonathan W Adams /* fall through to kmem_cache_free() */ 2521dce01e3fSJonathan W Adams 2522dce01e3fSJonathan W Adams } else { 2523dce01e3fSJonathan W Adams if (buf == NULL && size == 0) 2524dce01e3fSJonathan W Adams return; 2525dce01e3fSJonathan W Adams vmem_free(kmem_oversize_arena, buf, size); 2526dce01e3fSJonathan W Adams return; 2527dce01e3fSJonathan W Adams } 2528dce01e3fSJonathan W Adams 2529dce01e3fSJonathan W Adams if (cp->cache_flags & KMF_BUFTAG) { 2530dce01e3fSJonathan W Adams kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2531dce01e3fSJonathan W Adams uint32_t *ip = (uint32_t *)btp; 2532dce01e3fSJonathan W Adams if (ip[1] != KMEM_SIZE_ENCODE(size)) { 2533dce01e3fSJonathan W Adams if (*(uint64_t *)buf == KMEM_FREE_PATTERN) { 2534dce01e3fSJonathan W Adams kmem_error(KMERR_DUPFREE, cp, buf); 25357c478bd9Sstevel@tonic-gate return; 25367c478bd9Sstevel@tonic-gate } 2537dce01e3fSJonathan W Adams if (KMEM_SIZE_VALID(ip[1])) { 2538dce01e3fSJonathan W Adams ip[0] = KMEM_SIZE_ENCODE(size); 2539dce01e3fSJonathan W Adams kmem_error(KMERR_BADSIZE, cp, buf); 2540dce01e3fSJonathan W Adams 
} else { 25417c478bd9Sstevel@tonic-gate kmem_error(KMERR_REDZONE, cp, buf); 25427c478bd9Sstevel@tonic-gate } 2543dce01e3fSJonathan W Adams return; 25447c478bd9Sstevel@tonic-gate } 2545dce01e3fSJonathan W Adams if (((uint8_t *)buf)[size] != KMEM_REDZONE_BYTE) { 2546dce01e3fSJonathan W Adams kmem_error(KMERR_REDZONE, cp, buf); 25477c478bd9Sstevel@tonic-gate return; 2548dce01e3fSJonathan W Adams } 2549dce01e3fSJonathan W Adams btp->bt_redzone = KMEM_REDZONE_PATTERN; 2550dce01e3fSJonathan W Adams if (cp->cache_flags & KMF_LITE) { 2551dce01e3fSJonathan W Adams KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, 2552dce01e3fSJonathan W Adams caller()); 2553dce01e3fSJonathan W Adams } 25547c478bd9Sstevel@tonic-gate } 2555dce01e3fSJonathan W Adams kmem_cache_free(cp, buf); 25567c478bd9Sstevel@tonic-gate } 25577c478bd9Sstevel@tonic-gate 25587c478bd9Sstevel@tonic-gate void * 25597c478bd9Sstevel@tonic-gate kmem_firewall_va_alloc(vmem_t *vmp, size_t size, int vmflag) 25607c478bd9Sstevel@tonic-gate { 25617c478bd9Sstevel@tonic-gate size_t realsize = size + vmp->vm_quantum; 25627c478bd9Sstevel@tonic-gate void *addr; 25637c478bd9Sstevel@tonic-gate 25647c478bd9Sstevel@tonic-gate /* 25657c478bd9Sstevel@tonic-gate * Annoying edge case: if 'size' is just shy of ULONG_MAX, adding 25667c478bd9Sstevel@tonic-gate * vm_quantum will cause integer wraparound. Check for this, and 25677c478bd9Sstevel@tonic-gate * blow off the firewall page in this case. Note that such a 25687c478bd9Sstevel@tonic-gate * giant allocation (the entire kernel address space) can never 25697c478bd9Sstevel@tonic-gate * be satisfied, so it will either fail immediately (VM_NOSLEEP) 25707c478bd9Sstevel@tonic-gate * or sleep forever (VM_SLEEP). Thus, there is no need for a 25717c478bd9Sstevel@tonic-gate * corresponding check in kmem_firewall_va_free(). 25727c478bd9Sstevel@tonic-gate */ 25737c478bd9Sstevel@tonic-gate if (realsize < size) 25747c478bd9Sstevel@tonic-gate realsize = size; 25757c478bd9Sstevel@tonic-gate 25767c478bd9Sstevel@tonic-gate /* 25777c478bd9Sstevel@tonic-gate * While boot still owns resource management, make sure that this 25787c478bd9Sstevel@tonic-gate * redzone virtual address allocation is properly accounted for in 25797c478bd9Sstevel@tonic-gate * OBPs "virtual-memory" "available" lists because we're 25807c478bd9Sstevel@tonic-gate * effectively claiming them for a red zone. If we don't do this, 25817c478bd9Sstevel@tonic-gate * the available lists become too fragmented and too large for the 25827c478bd9Sstevel@tonic-gate * current boot/kernel memory list interface. 25837c478bd9Sstevel@tonic-gate */ 25847c478bd9Sstevel@tonic-gate addr = vmem_alloc(vmp, realsize, vmflag | VM_NEXTFIT); 25857c478bd9Sstevel@tonic-gate 25867c478bd9Sstevel@tonic-gate if (addr != NULL && kvseg.s_base == NULL && realsize != size) 25877c478bd9Sstevel@tonic-gate (void) boot_virt_alloc((char *)addr + size, vmp->vm_quantum); 25887c478bd9Sstevel@tonic-gate 25897c478bd9Sstevel@tonic-gate return (addr); 25907c478bd9Sstevel@tonic-gate } 25917c478bd9Sstevel@tonic-gate 25927c478bd9Sstevel@tonic-gate void 25937c478bd9Sstevel@tonic-gate kmem_firewall_va_free(vmem_t *vmp, void *addr, size_t size) 25947c478bd9Sstevel@tonic-gate { 25957c478bd9Sstevel@tonic-gate ASSERT((kvseg.s_base == NULL ? 
25967c478bd9Sstevel@tonic-gate va_to_pfn((char *)addr + size) : 25977c478bd9Sstevel@tonic-gate hat_getpfnum(kas.a_hat, (caddr_t)addr + size)) == PFN_INVALID); 25987c478bd9Sstevel@tonic-gate 25997c478bd9Sstevel@tonic-gate vmem_free(vmp, addr, size + vmp->vm_quantum); 26007c478bd9Sstevel@tonic-gate } 26017c478bd9Sstevel@tonic-gate 26027c478bd9Sstevel@tonic-gate /* 26037c478bd9Sstevel@tonic-gate * Try to allocate at least `size' bytes of memory without sleeping or 26047c478bd9Sstevel@tonic-gate * panicking. Return actual allocated size in `asize'. If allocation failed, 26057c478bd9Sstevel@tonic-gate * try final allocation with sleep or panic allowed. 26067c478bd9Sstevel@tonic-gate */ 26077c478bd9Sstevel@tonic-gate void * 26087c478bd9Sstevel@tonic-gate kmem_alloc_tryhard(size_t size, size_t *asize, int kmflag) 26097c478bd9Sstevel@tonic-gate { 26107c478bd9Sstevel@tonic-gate void *p; 26117c478bd9Sstevel@tonic-gate 26127c478bd9Sstevel@tonic-gate *asize = P2ROUNDUP(size, KMEM_ALIGN); 26137c478bd9Sstevel@tonic-gate do { 26147c478bd9Sstevel@tonic-gate p = kmem_alloc(*asize, (kmflag | KM_NOSLEEP) & ~KM_PANIC); 26157c478bd9Sstevel@tonic-gate if (p != NULL) 26167c478bd9Sstevel@tonic-gate return (p); 26177c478bd9Sstevel@tonic-gate *asize += KMEM_ALIGN; 26187c478bd9Sstevel@tonic-gate } while (*asize <= PAGESIZE); 26197c478bd9Sstevel@tonic-gate 26207c478bd9Sstevel@tonic-gate *asize = P2ROUNDUP(size, KMEM_ALIGN); 26217c478bd9Sstevel@tonic-gate return (kmem_alloc(*asize, kmflag)); 26227c478bd9Sstevel@tonic-gate } 26237c478bd9Sstevel@tonic-gate 26247c478bd9Sstevel@tonic-gate /* 26257c478bd9Sstevel@tonic-gate * Reclaim all unused memory from a cache. 26267c478bd9Sstevel@tonic-gate */ 26277c478bd9Sstevel@tonic-gate static void 26287c478bd9Sstevel@tonic-gate kmem_cache_reap(kmem_cache_t *cp) 26297c478bd9Sstevel@tonic-gate { 2630b5fca8f8Stomee ASSERT(taskq_member(kmem_taskq, curthread)); 2631*686031edSTom Erickson cp->cache_reap++; 2632b5fca8f8Stomee 26337c478bd9Sstevel@tonic-gate /* 26347c478bd9Sstevel@tonic-gate * Ask the cache's owner to free some memory if possible. 26357c478bd9Sstevel@tonic-gate * The idea is to handle things like the inode cache, which 26367c478bd9Sstevel@tonic-gate * typically sits on a bunch of memory that it doesn't truly 26377c478bd9Sstevel@tonic-gate * *need*. Reclaim policy is entirely up to the owner; this 26387c478bd9Sstevel@tonic-gate * callback is just an advisory plea for help. 26397c478bd9Sstevel@tonic-gate */ 2640b5fca8f8Stomee if (cp->cache_reclaim != NULL) { 2641b5fca8f8Stomee long delta; 2642b5fca8f8Stomee 2643b5fca8f8Stomee /* 2644b5fca8f8Stomee * Reclaimed memory should be reapable (not included in the 2645b5fca8f8Stomee * depot's working set). 
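 * The delta computed below credits any magazines that the reclaim
 * callback adds to the full list toward ml_reaplimit and ml_min, so
 * that the kmem_depot_ws_reap() call at the end of this function can
 * free them immediately instead of carrying them in the working set.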
2646b5fca8f8Stomee */ 2647b5fca8f8Stomee delta = cp->cache_full.ml_total; 26487c478bd9Sstevel@tonic-gate cp->cache_reclaim(cp->cache_private); 2649b5fca8f8Stomee delta = cp->cache_full.ml_total - delta; 2650b5fca8f8Stomee if (delta > 0) { 2651b5fca8f8Stomee mutex_enter(&cp->cache_depot_lock); 2652b5fca8f8Stomee cp->cache_full.ml_reaplimit += delta; 2653b5fca8f8Stomee cp->cache_full.ml_min += delta; 2654b5fca8f8Stomee mutex_exit(&cp->cache_depot_lock); 2655b5fca8f8Stomee } 2656b5fca8f8Stomee } 26577c478bd9Sstevel@tonic-gate 26587c478bd9Sstevel@tonic-gate kmem_depot_ws_reap(cp); 2659b5fca8f8Stomee 2660b5fca8f8Stomee if (cp->cache_defrag != NULL && !kmem_move_noreap) { 2661b5fca8f8Stomee kmem_cache_defrag(cp); 2662b5fca8f8Stomee } 26637c478bd9Sstevel@tonic-gate } 26647c478bd9Sstevel@tonic-gate 26657c478bd9Sstevel@tonic-gate static void 26667c478bd9Sstevel@tonic-gate kmem_reap_timeout(void *flag_arg) 26677c478bd9Sstevel@tonic-gate { 26687c478bd9Sstevel@tonic-gate uint32_t *flag = (uint32_t *)flag_arg; 26697c478bd9Sstevel@tonic-gate 26707c478bd9Sstevel@tonic-gate ASSERT(flag == &kmem_reaping || flag == &kmem_reaping_idspace); 26717c478bd9Sstevel@tonic-gate *flag = 0; 26727c478bd9Sstevel@tonic-gate } 26737c478bd9Sstevel@tonic-gate 26747c478bd9Sstevel@tonic-gate static void 26757c478bd9Sstevel@tonic-gate kmem_reap_done(void *flag) 26767c478bd9Sstevel@tonic-gate { 26777c478bd9Sstevel@tonic-gate (void) timeout(kmem_reap_timeout, flag, kmem_reap_interval); 26787c478bd9Sstevel@tonic-gate } 26797c478bd9Sstevel@tonic-gate 26807c478bd9Sstevel@tonic-gate static void 26817c478bd9Sstevel@tonic-gate kmem_reap_start(void *flag) 26827c478bd9Sstevel@tonic-gate { 26837c478bd9Sstevel@tonic-gate ASSERT(flag == &kmem_reaping || flag == &kmem_reaping_idspace); 26847c478bd9Sstevel@tonic-gate 26857c478bd9Sstevel@tonic-gate if (flag == &kmem_reaping) { 26867c478bd9Sstevel@tonic-gate kmem_cache_applyall(kmem_cache_reap, kmem_taskq, TQ_NOSLEEP); 26877c478bd9Sstevel@tonic-gate /* 26887c478bd9Sstevel@tonic-gate * if we have segkp under heap, reap segkp cache. 26897c478bd9Sstevel@tonic-gate */ 26907c478bd9Sstevel@tonic-gate if (segkp_fromheap) 26917c478bd9Sstevel@tonic-gate segkp_cache_free(); 26927c478bd9Sstevel@tonic-gate } 26937c478bd9Sstevel@tonic-gate else 26947c478bd9Sstevel@tonic-gate kmem_cache_applyall_id(kmem_cache_reap, kmem_taskq, TQ_NOSLEEP); 26957c478bd9Sstevel@tonic-gate 26967c478bd9Sstevel@tonic-gate /* 26977c478bd9Sstevel@tonic-gate * We use taskq_dispatch() to schedule a timeout to clear 26987c478bd9Sstevel@tonic-gate * the flag so that kmem_reap() becomes self-throttling: 26997c478bd9Sstevel@tonic-gate * we won't reap again until the current reap completes *and* 27007c478bd9Sstevel@tonic-gate * at least kmem_reap_interval ticks have elapsed. 
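 * (The flag is claimed with cas32(flag, 0, 1) in kmem_reap_common()
 * below; kmem_reap_done() schedules kmem_reap_timeout(), which resets
 * it after the interval expires.)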
27017c478bd9Sstevel@tonic-gate */ 27027c478bd9Sstevel@tonic-gate if (!taskq_dispatch(kmem_taskq, kmem_reap_done, flag, TQ_NOSLEEP)) 27037c478bd9Sstevel@tonic-gate kmem_reap_done(flag); 27047c478bd9Sstevel@tonic-gate } 27057c478bd9Sstevel@tonic-gate 27067c478bd9Sstevel@tonic-gate static void 27077c478bd9Sstevel@tonic-gate kmem_reap_common(void *flag_arg) 27087c478bd9Sstevel@tonic-gate { 27097c478bd9Sstevel@tonic-gate uint32_t *flag = (uint32_t *)flag_arg; 27107c478bd9Sstevel@tonic-gate 27117c478bd9Sstevel@tonic-gate if (MUTEX_HELD(&kmem_cache_lock) || kmem_taskq == NULL || 27127c478bd9Sstevel@tonic-gate cas32(flag, 0, 1) != 0) 27137c478bd9Sstevel@tonic-gate return; 27147c478bd9Sstevel@tonic-gate 27157c478bd9Sstevel@tonic-gate /* 27167c478bd9Sstevel@tonic-gate * It may not be kosher to do memory allocation when a reap is called 27177c478bd9Sstevel@tonic-gate * (for example, if vmem_populate() is in the call chain). 27187c478bd9Sstevel@tonic-gate * So we start the reap going with a TQ_NOALLOC dispatch. If the 27197c478bd9Sstevel@tonic-gate * dispatch fails, we reset the flag, and the next reap will try again. 27207c478bd9Sstevel@tonic-gate */ 27217c478bd9Sstevel@tonic-gate if (!taskq_dispatch(kmem_taskq, kmem_reap_start, flag, TQ_NOALLOC)) 27227c478bd9Sstevel@tonic-gate *flag = 0; 27237c478bd9Sstevel@tonic-gate } 27247c478bd9Sstevel@tonic-gate 27257c478bd9Sstevel@tonic-gate /* 27267c478bd9Sstevel@tonic-gate * Reclaim all unused memory from all caches. Called from the VM system 27277c478bd9Sstevel@tonic-gate * when memory gets tight. 27287c478bd9Sstevel@tonic-gate */ 27297c478bd9Sstevel@tonic-gate void 27307c478bd9Sstevel@tonic-gate kmem_reap(void) 27317c478bd9Sstevel@tonic-gate { 27327c478bd9Sstevel@tonic-gate kmem_reap_common(&kmem_reaping); 27337c478bd9Sstevel@tonic-gate } 27347c478bd9Sstevel@tonic-gate 27357c478bd9Sstevel@tonic-gate /* 27367c478bd9Sstevel@tonic-gate * Reclaim all unused memory from identifier arenas, called when a vmem 27377c478bd9Sstevel@tonic-gate * arena not backed by memory is exhausted. Since reaping memory-backed caches 27387c478bd9Sstevel@tonic-gate * cannot help with identifier exhaustion, we avoid both a large amount of 27397c478bd9Sstevel@tonic-gate * work and unwanted side-effects from reclaim callbacks. 27407c478bd9Sstevel@tonic-gate */ 27417c478bd9Sstevel@tonic-gate void 27427c478bd9Sstevel@tonic-gate kmem_reap_idspace(void) 27437c478bd9Sstevel@tonic-gate { 27447c478bd9Sstevel@tonic-gate kmem_reap_common(&kmem_reaping_idspace); 27457c478bd9Sstevel@tonic-gate } 27467c478bd9Sstevel@tonic-gate 27477c478bd9Sstevel@tonic-gate /* 27487c478bd9Sstevel@tonic-gate * Purge all magazines from a cache and set its magazine limit to zero. 27497c478bd9Sstevel@tonic-gate * All calls are serialized by the kmem_taskq lock, except for the final 27507c478bd9Sstevel@tonic-gate * call from kmem_cache_destroy().
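 * (Each CPU's loaded and previously loaded magazines are detached under
 * cc_lock and cc_magsize is set to zero, so that allocations and frees
 * fall through to the slab layer while the purge proceeds.)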
27517c478bd9Sstevel@tonic-gate */ 27527c478bd9Sstevel@tonic-gate static void 27537c478bd9Sstevel@tonic-gate kmem_cache_magazine_purge(kmem_cache_t *cp) 27547c478bd9Sstevel@tonic-gate { 27557c478bd9Sstevel@tonic-gate kmem_cpu_cache_t *ccp; 27567c478bd9Sstevel@tonic-gate kmem_magazine_t *mp, *pmp; 27577c478bd9Sstevel@tonic-gate int rounds, prounds, cpu_seqid; 27587c478bd9Sstevel@tonic-gate 2759b5fca8f8Stomee ASSERT(!list_link_active(&cp->cache_link) || 2760b5fca8f8Stomee taskq_member(kmem_taskq, curthread)); 27617c478bd9Sstevel@tonic-gate ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 27627c478bd9Sstevel@tonic-gate 27637c478bd9Sstevel@tonic-gate for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) { 27647c478bd9Sstevel@tonic-gate ccp = &cp->cache_cpu[cpu_seqid]; 27657c478bd9Sstevel@tonic-gate 27667c478bd9Sstevel@tonic-gate mutex_enter(&ccp->cc_lock); 27677c478bd9Sstevel@tonic-gate mp = ccp->cc_loaded; 27687c478bd9Sstevel@tonic-gate pmp = ccp->cc_ploaded; 27697c478bd9Sstevel@tonic-gate rounds = ccp->cc_rounds; 27707c478bd9Sstevel@tonic-gate prounds = ccp->cc_prounds; 27717c478bd9Sstevel@tonic-gate ccp->cc_loaded = NULL; 27727c478bd9Sstevel@tonic-gate ccp->cc_ploaded = NULL; 27737c478bd9Sstevel@tonic-gate ccp->cc_rounds = -1; 27747c478bd9Sstevel@tonic-gate ccp->cc_prounds = -1; 27757c478bd9Sstevel@tonic-gate ccp->cc_magsize = 0; 27767c478bd9Sstevel@tonic-gate mutex_exit(&ccp->cc_lock); 27777c478bd9Sstevel@tonic-gate 27787c478bd9Sstevel@tonic-gate if (mp) 27797c478bd9Sstevel@tonic-gate kmem_magazine_destroy(cp, mp, rounds); 27807c478bd9Sstevel@tonic-gate if (pmp) 27817c478bd9Sstevel@tonic-gate kmem_magazine_destroy(cp, pmp, prounds); 27827c478bd9Sstevel@tonic-gate } 27837c478bd9Sstevel@tonic-gate 27847c478bd9Sstevel@tonic-gate /* 27857c478bd9Sstevel@tonic-gate * Updating the working set statistics twice in a row has the 27867c478bd9Sstevel@tonic-gate * effect of setting the working set size to zero, so everything 27877c478bd9Sstevel@tonic-gate * is eligible for reaping. 27887c478bd9Sstevel@tonic-gate */ 27897c478bd9Sstevel@tonic-gate kmem_depot_ws_update(cp); 27907c478bd9Sstevel@tonic-gate kmem_depot_ws_update(cp); 27917c478bd9Sstevel@tonic-gate 27927c478bd9Sstevel@tonic-gate kmem_depot_ws_reap(cp); 27937c478bd9Sstevel@tonic-gate } 27947c478bd9Sstevel@tonic-gate 27957c478bd9Sstevel@tonic-gate /* 27967c478bd9Sstevel@tonic-gate * Enable per-cpu magazines on a cache. 27977c478bd9Sstevel@tonic-gate */ 27987c478bd9Sstevel@tonic-gate static void 27997c478bd9Sstevel@tonic-gate kmem_cache_magazine_enable(kmem_cache_t *cp) 28007c478bd9Sstevel@tonic-gate { 28017c478bd9Sstevel@tonic-gate int cpu_seqid; 28027c478bd9Sstevel@tonic-gate 28037c478bd9Sstevel@tonic-gate if (cp->cache_flags & KMF_NOMAGAZINE) 28047c478bd9Sstevel@tonic-gate return; 28057c478bd9Sstevel@tonic-gate 28067c478bd9Sstevel@tonic-gate for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) { 28077c478bd9Sstevel@tonic-gate kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid]; 28087c478bd9Sstevel@tonic-gate mutex_enter(&ccp->cc_lock); 28097c478bd9Sstevel@tonic-gate ccp->cc_magsize = cp->cache_magtype->mt_magsize; 28107c478bd9Sstevel@tonic-gate mutex_exit(&ccp->cc_lock); 28117c478bd9Sstevel@tonic-gate } 28127c478bd9Sstevel@tonic-gate 28137c478bd9Sstevel@tonic-gate } 28147c478bd9Sstevel@tonic-gate 2815fa9e4066Sahrens /* 2816fa9e4066Sahrens * Reap (almost) everything right now. See kmem_cache_magazine_purge() 2817fa9e4066Sahrens * for explanation of the back-to-back kmem_depot_ws_update() calls. 
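 * A sketch of why the double update works (see kmem_depot_ws_update()
 * and kmem_depot_ws_reap()):
 *
 *	update #1:	ml_reaplimit = ml_min;   ml_min = ml_total
 *	update #2:	ml_reaplimit = ml_total; ml_min = ml_total
 *	reap:		MIN(ml_reaplimit, ml_min) == ml_total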
2818fa9e4066Sahrens */ 2819fa9e4066Sahrens void 2820fa9e4066Sahrens kmem_cache_reap_now(kmem_cache_t *cp) 2821fa9e4066Sahrens { 2822b5fca8f8Stomee ASSERT(list_link_active(&cp->cache_link)); 2823b5fca8f8Stomee 2824fa9e4066Sahrens kmem_depot_ws_update(cp); 2825fa9e4066Sahrens kmem_depot_ws_update(cp); 2826fa9e4066Sahrens 2827fa9e4066Sahrens (void) taskq_dispatch(kmem_taskq, 2828fa9e4066Sahrens (task_func_t *)kmem_depot_ws_reap, cp, TQ_SLEEP); 2829fa9e4066Sahrens taskq_wait(kmem_taskq); 2830fa9e4066Sahrens } 2831fa9e4066Sahrens 28327c478bd9Sstevel@tonic-gate /* 28337c478bd9Sstevel@tonic-gate * Recompute a cache's magazine size. The trade-off is that larger magazines 28347c478bd9Sstevel@tonic-gate * provide a higher transfer rate with the depot, while smaller magazines 28357c478bd9Sstevel@tonic-gate * reduce memory consumption. Magazine resizing is an expensive operation; 28367c478bd9Sstevel@tonic-gate * it should not be done frequently. 28377c478bd9Sstevel@tonic-gate * 28387c478bd9Sstevel@tonic-gate * Changes to the magazine size are serialized by the kmem_taskq lock. 28397c478bd9Sstevel@tonic-gate * 28407c478bd9Sstevel@tonic-gate * Note: at present this only grows the magazine size. It might be useful 28417c478bd9Sstevel@tonic-gate * to allow shrinkage too. 28427c478bd9Sstevel@tonic-gate */ 28437c478bd9Sstevel@tonic-gate static void 28447c478bd9Sstevel@tonic-gate kmem_cache_magazine_resize(kmem_cache_t *cp) 28457c478bd9Sstevel@tonic-gate { 28467c478bd9Sstevel@tonic-gate kmem_magtype_t *mtp = cp->cache_magtype; 28477c478bd9Sstevel@tonic-gate 28487c478bd9Sstevel@tonic-gate ASSERT(taskq_member(kmem_taskq, curthread)); 28497c478bd9Sstevel@tonic-gate 28507c478bd9Sstevel@tonic-gate if (cp->cache_chunksize < mtp->mt_maxbuf) { 28517c478bd9Sstevel@tonic-gate kmem_cache_magazine_purge(cp); 28527c478bd9Sstevel@tonic-gate mutex_enter(&cp->cache_depot_lock); 28537c478bd9Sstevel@tonic-gate cp->cache_magtype = ++mtp; 28547c478bd9Sstevel@tonic-gate cp->cache_depot_contention_prev = 28557c478bd9Sstevel@tonic-gate cp->cache_depot_contention + INT_MAX; 28567c478bd9Sstevel@tonic-gate mutex_exit(&cp->cache_depot_lock); 28577c478bd9Sstevel@tonic-gate kmem_cache_magazine_enable(cp); 28587c478bd9Sstevel@tonic-gate } 28597c478bd9Sstevel@tonic-gate } 28607c478bd9Sstevel@tonic-gate 28617c478bd9Sstevel@tonic-gate /* 28627c478bd9Sstevel@tonic-gate * Rescale a cache's hash table, so that the table size is roughly the 28637c478bd9Sstevel@tonic-gate * cache size. We want the average lookup time to be extremely small. 
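 * The target size below, 1 << (highbit(3 * cache_buftotal + 4) - 2), is
 * the power of two falling roughly between 0.75x and 1.5x the buffer
 * count, keeping the average chain near one entry; a cache holding
 * 10000 buffers, for example, gets an 8192-bucket table. The rescale is
 * skipped unless the size would change by more than a factor of two.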
28647c478bd9Sstevel@tonic-gate */ 28657c478bd9Sstevel@tonic-gate static void 28667c478bd9Sstevel@tonic-gate kmem_hash_rescale(kmem_cache_t *cp) 28677c478bd9Sstevel@tonic-gate { 28687c478bd9Sstevel@tonic-gate kmem_bufctl_t **old_table, **new_table, *bcp; 28697c478bd9Sstevel@tonic-gate size_t old_size, new_size, h; 28707c478bd9Sstevel@tonic-gate 28717c478bd9Sstevel@tonic-gate ASSERT(taskq_member(kmem_taskq, curthread)); 28727c478bd9Sstevel@tonic-gate 28737c478bd9Sstevel@tonic-gate new_size = MAX(KMEM_HASH_INITIAL, 28747c478bd9Sstevel@tonic-gate 1 << (highbit(3 * cp->cache_buftotal + 4) - 2)); 28757c478bd9Sstevel@tonic-gate old_size = cp->cache_hash_mask + 1; 28767c478bd9Sstevel@tonic-gate 28777c478bd9Sstevel@tonic-gate if ((old_size >> 1) <= new_size && new_size <= (old_size << 1)) 28787c478bd9Sstevel@tonic-gate return; 28797c478bd9Sstevel@tonic-gate 28807c478bd9Sstevel@tonic-gate new_table = vmem_alloc(kmem_hash_arena, new_size * sizeof (void *), 28817c478bd9Sstevel@tonic-gate VM_NOSLEEP); 28827c478bd9Sstevel@tonic-gate if (new_table == NULL) 28837c478bd9Sstevel@tonic-gate return; 28847c478bd9Sstevel@tonic-gate bzero(new_table, new_size * sizeof (void *)); 28857c478bd9Sstevel@tonic-gate 28867c478bd9Sstevel@tonic-gate mutex_enter(&cp->cache_lock); 28877c478bd9Sstevel@tonic-gate 28887c478bd9Sstevel@tonic-gate old_size = cp->cache_hash_mask + 1; 28897c478bd9Sstevel@tonic-gate old_table = cp->cache_hash_table; 28907c478bd9Sstevel@tonic-gate 28917c478bd9Sstevel@tonic-gate cp->cache_hash_mask = new_size - 1; 28927c478bd9Sstevel@tonic-gate cp->cache_hash_table = new_table; 28937c478bd9Sstevel@tonic-gate cp->cache_rescale++; 28947c478bd9Sstevel@tonic-gate 28957c478bd9Sstevel@tonic-gate for (h = 0; h < old_size; h++) { 28967c478bd9Sstevel@tonic-gate bcp = old_table[h]; 28977c478bd9Sstevel@tonic-gate while (bcp != NULL) { 28987c478bd9Sstevel@tonic-gate void *addr = bcp->bc_addr; 28997c478bd9Sstevel@tonic-gate kmem_bufctl_t *next_bcp = bcp->bc_next; 29007c478bd9Sstevel@tonic-gate kmem_bufctl_t **hash_bucket = KMEM_HASH(cp, addr); 29017c478bd9Sstevel@tonic-gate bcp->bc_next = *hash_bucket; 29027c478bd9Sstevel@tonic-gate *hash_bucket = bcp; 29037c478bd9Sstevel@tonic-gate bcp = next_bcp; 29047c478bd9Sstevel@tonic-gate } 29057c478bd9Sstevel@tonic-gate } 29067c478bd9Sstevel@tonic-gate 29077c478bd9Sstevel@tonic-gate mutex_exit(&cp->cache_lock); 29087c478bd9Sstevel@tonic-gate 29097c478bd9Sstevel@tonic-gate vmem_free(kmem_hash_arena, old_table, old_size * sizeof (void *)); 29107c478bd9Sstevel@tonic-gate } 29117c478bd9Sstevel@tonic-gate 29127c478bd9Sstevel@tonic-gate /* 2913b5fca8f8Stomee * Perform periodic maintenance on a cache: hash rescaling, depot working-set 2914b5fca8f8Stomee * update, magazine resizing, and slab consolidation. 29157c478bd9Sstevel@tonic-gate */ 29167c478bd9Sstevel@tonic-gate static void 29177c478bd9Sstevel@tonic-gate kmem_cache_update(kmem_cache_t *cp) 29187c478bd9Sstevel@tonic-gate { 29197c478bd9Sstevel@tonic-gate int need_hash_rescale = 0; 29207c478bd9Sstevel@tonic-gate int need_magazine_resize = 0; 29217c478bd9Sstevel@tonic-gate 29227c478bd9Sstevel@tonic-gate ASSERT(MUTEX_HELD(&kmem_cache_lock)); 29237c478bd9Sstevel@tonic-gate 29247c478bd9Sstevel@tonic-gate /* 29257c478bd9Sstevel@tonic-gate * If the cache has become much larger or smaller than its hash table, 29267c478bd9Sstevel@tonic-gate * fire off a request to rescale the hash table. 
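 * ("Much" here means the buffer count has grown past twice the bucket
 * count or shrunk below half of it, as the test below spells out.)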
/*
 * Perform periodic maintenance on a cache: hash rescaling, depot working-set
 * update, magazine resizing, and slab consolidation.
 */
static void
kmem_cache_update(kmem_cache_t *cp)
{
	int need_hash_rescale = 0;
	int need_magazine_resize = 0;

	ASSERT(MUTEX_HELD(&kmem_cache_lock));

	/*
	 * If the cache has become much larger or smaller than its hash table,
	 * fire off a request to rescale the hash table.
	 */
	mutex_enter(&cp->cache_lock);

	if ((cp->cache_flags & KMF_HASH) &&
	    (cp->cache_buftotal > (cp->cache_hash_mask << 1) ||
	    (cp->cache_buftotal < (cp->cache_hash_mask >> 1) &&
	    cp->cache_hash_mask > KMEM_HASH_INITIAL)))
		need_hash_rescale = 1;

	mutex_exit(&cp->cache_lock);

	/*
	 * Update the depot working set statistics.
	 */
	kmem_depot_ws_update(cp);

	/*
	 * If there's a lot of contention in the depot,
	 * increase the magazine size.
	 */
	mutex_enter(&cp->cache_depot_lock);

	if (cp->cache_chunksize < cp->cache_magtype->mt_maxbuf &&
	    (int)(cp->cache_depot_contention -
	    cp->cache_depot_contention_prev) > kmem_depot_contention)
		need_magazine_resize = 1;

	cp->cache_depot_contention_prev = cp->cache_depot_contention;

	mutex_exit(&cp->cache_depot_lock);

	if (need_hash_rescale)
		(void) taskq_dispatch(kmem_taskq,
		    (task_func_t *)kmem_hash_rescale, cp, TQ_NOSLEEP);

	if (need_magazine_resize)
		(void) taskq_dispatch(kmem_taskq,
		    (task_func_t *)kmem_cache_magazine_resize, cp, TQ_NOSLEEP);

	if (cp->cache_defrag != NULL)
		(void) taskq_dispatch(kmem_taskq,
		    (task_func_t *)kmem_cache_scan, cp, TQ_NOSLEEP);
}

static void kmem_update(void *);

static void
kmem_update_timeout(void *dummy)
{
	(void) timeout(kmem_update, dummy, kmem_reap_interval);
}

static void
kmem_update(void *dummy)
{
	kmem_cache_applyall(kmem_cache_update, NULL, TQ_NOSLEEP);

	/*
	 * We use taskq_dispatch() to reschedule the timeout so that
	 * kmem_update() becomes self-throttling: it won't schedule
	 * new tasks until all previous tasks have completed.
	 */
	if (!taskq_dispatch(kmem_taskq, kmem_update_timeout, dummy, TQ_NOSLEEP))
		kmem_update_timeout(NULL);
}

static int
kmem_cache_kstat_update(kstat_t *ksp, int rw)
{
	struct kmem_cache_kstat *kmcp = &kmem_cache_kstat;
	kmem_cache_t *cp = ksp->ks_private;
	uint64_t cpu_buf_avail;
	uint64_t buf_avail = 0;
	int cpu_seqid;
	long reap;

	ASSERT(MUTEX_HELD(&kmem_cache_kstat_lock));

	if (rw == KSTAT_WRITE)
		return (EACCES);

	mutex_enter(&cp->cache_lock);

	kmcp->kmc_alloc_fail.value.ui64 = cp->cache_alloc_fail;
	kmcp->kmc_alloc.value.ui64 = cp->cache_slab_alloc;
	kmcp->kmc_free.value.ui64 = cp->cache_slab_free;
	kmcp->kmc_slab_alloc.value.ui64 = cp->cache_slab_alloc;
	kmcp->kmc_slab_free.value.ui64 = cp->cache_slab_free;

	for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
		kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid];

		mutex_enter(&ccp->cc_lock);

		cpu_buf_avail = 0;
		if (ccp->cc_rounds > 0)
			cpu_buf_avail += ccp->cc_rounds;
		if (ccp->cc_prounds > 0)
			cpu_buf_avail += ccp->cc_prounds;

		kmcp->kmc_alloc.value.ui64 += ccp->cc_alloc;
		kmcp->kmc_free.value.ui64 += ccp->cc_free;
		buf_avail += cpu_buf_avail;

		mutex_exit(&ccp->cc_lock);
	}

	mutex_enter(&cp->cache_depot_lock);

	kmcp->kmc_depot_alloc.value.ui64 = cp->cache_full.ml_alloc;
	kmcp->kmc_depot_free.value.ui64 = cp->cache_empty.ml_alloc;
	kmcp->kmc_depot_contention.value.ui64 = cp->cache_depot_contention;
	kmcp->kmc_full_magazines.value.ui64 = cp->cache_full.ml_total;
	kmcp->kmc_empty_magazines.value.ui64 = cp->cache_empty.ml_total;
	kmcp->kmc_magazine_size.value.ui64 =
	    (cp->cache_flags & KMF_NOMAGAZINE) ?
	    0 : cp->cache_magtype->mt_magsize;

	kmcp->kmc_alloc.value.ui64 += cp->cache_full.ml_alloc;
	kmcp->kmc_free.value.ui64 += cp->cache_empty.ml_alloc;
	buf_avail += cp->cache_full.ml_total * cp->cache_magtype->mt_magsize;

	reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min);
	reap = MIN(reap, cp->cache_full.ml_total);

	mutex_exit(&cp->cache_depot_lock);

	kmcp->kmc_buf_size.value.ui64 = cp->cache_bufsize;
	kmcp->kmc_align.value.ui64 = cp->cache_align;
	kmcp->kmc_chunk_size.value.ui64 = cp->cache_chunksize;
	kmcp->kmc_slab_size.value.ui64 = cp->cache_slabsize;
	kmcp->kmc_buf_constructed.value.ui64 = buf_avail;
	buf_avail += cp->cache_bufslab;
	kmcp->kmc_buf_avail.value.ui64 = buf_avail;
	kmcp->kmc_buf_inuse.value.ui64 = cp->cache_buftotal - buf_avail;
	kmcp->kmc_buf_total.value.ui64 = cp->cache_buftotal;
	kmcp->kmc_buf_max.value.ui64 = cp->cache_bufmax;
	kmcp->kmc_slab_create.value.ui64 = cp->cache_slab_create;
	kmcp->kmc_slab_destroy.value.ui64 = cp->cache_slab_destroy;
	kmcp->kmc_hash_size.value.ui64 = (cp->cache_flags & KMF_HASH) ?
	    cp->cache_hash_mask + 1 : 0;
	kmcp->kmc_hash_lookup_depth.value.ui64 = cp->cache_lookup_depth;
	kmcp->kmc_hash_rescale.value.ui64 = cp->cache_rescale;
	kmcp->kmc_vmem_source.value.ui64 = cp->cache_arena->vm_id;
	kmcp->kmc_reap.value.ui64 = cp->cache_reap;

	if (cp->cache_defrag == NULL) {
		kmcp->kmc_move_callbacks.value.ui64 = 0;
		kmcp->kmc_move_yes.value.ui64 = 0;
		kmcp->kmc_move_no.value.ui64 = 0;
		kmcp->kmc_move_later.value.ui64 = 0;
		kmcp->kmc_move_dont_need.value.ui64 = 0;
		kmcp->kmc_move_dont_know.value.ui64 = 0;
		kmcp->kmc_move_hunt_found.value.ui64 = 0;
		kmcp->kmc_move_slabs_freed.value.ui64 = 0;
		kmcp->kmc_defrag.value.ui64 = 0;
		kmcp->kmc_scan.value.ui64 = 0;
		kmcp->kmc_move_reclaimable.value.ui64 = 0;
	} else {
		int64_t reclaimable;

		kmem_defrag_t *kd = cp->cache_defrag;
		kmcp->kmc_move_callbacks.value.ui64 = kd->kmd_callbacks;
		kmcp->kmc_move_yes.value.ui64 = kd->kmd_yes;
		kmcp->kmc_move_no.value.ui64 = kd->kmd_no;
		kmcp->kmc_move_later.value.ui64 = kd->kmd_later;
		kmcp->kmc_move_dont_need.value.ui64 = kd->kmd_dont_need;
		kmcp->kmc_move_dont_know.value.ui64 = kd->kmd_dont_know;
		kmcp->kmc_move_hunt_found.value.ui64 = kd->kmd_hunt_found;
		kmcp->kmc_move_slabs_freed.value.ui64 = kd->kmd_slabs_freed;
		kmcp->kmc_defrag.value.ui64 = kd->kmd_defrags;
		kmcp->kmc_scan.value.ui64 = kd->kmd_scans;

		reclaimable = cp->cache_bufslab - (cp->cache_maxchunks - 1);
		reclaimable = MAX(reclaimable, 0);
		reclaimable += ((uint64_t)reap * cp->cache_magtype->mt_magsize);
		kmcp->kmc_move_reclaimable.value.ui64 = reclaimable;
	}

	mutex_exit(&cp->cache_lock);
	return (0);
}

/*
 * Return a named statistic about a particular cache.
 * This shouldn't be called very often, so it's currently designed for
 * simplicity (leverages existing kstat support) rather than efficiency.
 */
uint64_t
kmem_cache_stat(kmem_cache_t *cp, char *name)
{
	int i;
	kstat_t *ksp = cp->cache_kstat;
	kstat_named_t *knp = (kstat_named_t *)&kmem_cache_kstat;
	uint64_t value = 0;

	if (ksp != NULL) {
		mutex_enter(&kmem_cache_kstat_lock);
		(void) kmem_cache_kstat_update(ksp, KSTAT_READ);
		for (i = 0; i < ksp->ks_ndata; i++) {
			if (strcmp(knp[i].name, name) == 0) {
				value = knp[i].value.ui64;
				break;
			}
		}
		mutex_exit(&kmem_cache_kstat_lock);
	}
	return (value);
}

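/*
 * Usage sketch (hypothetical caller; "alloc_fail" assumes the statistic
 * names initialized in kmem_cache_kstat):
 *
 *	uint64_t fails = kmem_cache_stat(cp, "alloc_fail");
 */
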
/*
 * Return an estimate of currently available kernel heap memory.
 * On 32-bit systems, physical memory may exceed virtual memory,
 * so we just truncate the result at 1GB.
 */
size_t
kmem_avail(void)
{
	spgcnt_t rmem = availrmem - tune.t_minarmem;
	spgcnt_t fmem = freemem - minfree;

	return ((size_t)ptob(MIN(MAX(MIN(rmem, fmem), 0),
	    1 << (30 - PAGESHIFT))));
}

/*
 * Return the maximum amount of memory that is (in theory) allocatable
 * from the heap.  This may be used as an estimate only, since there is
 * no guarantee that this space will still be available when an allocation
 * request is made, nor that the space can be allocated in one big request,
 * due to kernel heap fragmentation.
 */
size_t
kmem_maxavail(void)
{
	spgcnt_t pmem = availrmem - tune.t_minarmem;
	spgcnt_t vmem = btop(vmem_size(heap_arena, VMEM_FREE));

	return ((size_t)ptob(MAX(MIN(pmem, vmem), 0)));
}

/*
 * Indicate whether memory-intensive kmem debugging is enabled.
 */
int
kmem_debugging(void)
{
	return (kmem_flags & (KMF_AUDIT | KMF_REDZONE));
}

/* binning function, sorts finely at the two extremes */
#define	KMEM_PARTIAL_SLAB_WEIGHT(sp, binshift)				\
	((((sp)->slab_refcnt <= (binshift)) ||				\
	    (((sp)->slab_chunks - (sp)->slab_refcnt) <= (binshift)))	\
	    ? -(sp)->slab_refcnt					\
	    : -((binshift) + ((sp)->slab_refcnt >> (binshift))))

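/*
 * Weight illustration (example numbers only): with binshift == 3 on a
 * 32-chunk slab, nearly-empty slabs (slab_refcnt <= 3) and nearly-full
 * slabs (3 or fewer free chunks) weigh exactly -slab_refcnt, while the
 * middle of the range is binned coarsely: refcnt 4..7 weighs -3, refcnt
 * 8..15 weighs -4, refcnt 16..23 weighs -5.  More-allocated slabs thus
 * sort toward the front, with fine resolution only at the two extremes,
 * where it matters.
 */
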
/*
 * Minimizing the number of partial slabs on the freelist minimizes
 * fragmentation (the ratio of unused buffers held by the slab layer). There are
 * two ways to get a slab off of the freelist: 1) free all the buffers on the
 * slab, and 2) allocate all the buffers on the slab. It follows that we want
 * the most-used slabs at the front of the list where they have the best chance
 * of being completely allocated, and the least-used slabs at a safe distance
 * from the front to improve the odds that the few remaining buffers will all be
 * freed before another allocation can tie up the slab. For that reason a slab
 * with a higher slab_refcnt sorts less than a slab with a lower slab_refcnt.
 *
 * However, if a slab has at least one buffer that is deemed unfreeable, we
 * would rather have that slab at the front of the list regardless of
 * slab_refcnt, since even one unfreeable buffer makes the entire slab
 * unfreeable. If the client returns KMEM_CBRC_NO in response to a cache_move()
 * callback, the slab is marked unfreeable for as long as it remains on the
 * freelist.
 */
static int
kmem_partial_slab_cmp(const void *p0, const void *p1)
{
	const kmem_cache_t *cp;
	const kmem_slab_t *s0 = p0;
	const kmem_slab_t *s1 = p1;
	int w0, w1;
	size_t binshift;

	ASSERT(KMEM_SLAB_IS_PARTIAL(s0));
	ASSERT(KMEM_SLAB_IS_PARTIAL(s1));
	ASSERT(s0->slab_cache == s1->slab_cache);
	cp = s1->slab_cache;
	ASSERT(MUTEX_HELD(&cp->cache_lock));
	binshift = cp->cache_partial_binshift;

	/* weight of first slab */
	w0 = KMEM_PARTIAL_SLAB_WEIGHT(s0, binshift);
	if (s0->slab_flags & KMEM_SLAB_NOMOVE) {
		w0 -= cp->cache_maxchunks;
	}

	/* weight of second slab */
	w1 = KMEM_PARTIAL_SLAB_WEIGHT(s1, binshift);
	if (s1->slab_flags & KMEM_SLAB_NOMOVE) {
		w1 -= cp->cache_maxchunks;
	}

	if (w0 < w1)
		return (-1);
	if (w0 > w1)
		return (1);

	/* compare pointer values */
	if ((uintptr_t)s0 < (uintptr_t)s1)
		return (-1);
	if ((uintptr_t)s0 > (uintptr_t)s1)
		return (1);

	return (0);
}

/*
 * It must be valid to call the destructor (if any) on a newly created object.
 * That is, the constructor (if any) must leave the object in a valid state for
 * the destructor.
 */
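/*
 * Usage sketch (hypothetical client cache and callbacks, not part of this
 * file):
 *
 *	foo_cache = kmem_cache_create("foo_cache", sizeof (foo_t), 0,
 *	    foo_constructor, foo_destructor, NULL, NULL, NULL, 0);
 *
 * A NULL vmem source selects kmem_default_arena, as seen below.
 */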
kmem_cache_t *
kmem_cache_create(
	char *name,		/* descriptive name for this cache */
	size_t bufsize,		/* size of the objects it manages */
	size_t align,		/* required object alignment */
	int (*constructor)(void *, void *, int), /* object constructor */
	void (*destructor)(void *, void *),	/* object destructor */
	void (*reclaim)(void *), /* memory reclaim callback */
	void *private,		/* pass-thru arg for constr/destr/reclaim */
	vmem_t *vmp,		/* vmem source for slab allocation */
	int cflags)		/* cache creation flags */
{
	int cpu_seqid;
	size_t chunksize;
	kmem_cache_t *cp;
	kmem_magtype_t *mtp;
	size_t csize = KMEM_CACHE_SIZE(max_ncpus);

#ifdef	DEBUG
	/*
	 * Cache names should conform to the rules for valid C identifiers.
	 */
	if (!strident_valid(name)) {
		cmn_err(CE_CONT,
		    "kmem_cache_create: '%s' is an invalid cache name\n"
		    "cache names must conform to the rules for "
		    "C identifiers\n", name);
	}
#endif	/* DEBUG */

	if (vmp == NULL)
		vmp = kmem_default_arena;

	/*
	 * If this kmem cache has an identifier vmem arena as its source, mark
	 * it such to allow kmem_reap_idspace().
	 */
	ASSERT(!(cflags & KMC_IDENTIFIER));   /* consumer should not set this */
	if (vmp->vm_cflags & VMC_IDENTIFIER)
		cflags |= KMC_IDENTIFIER;

	/*
	 * Get a kmem_cache structure.  We arrange that cp->cache_cpu[]
	 * is aligned on a KMEM_CPU_CACHE_SIZE boundary to prevent
	 * false sharing of per-CPU data.
	 */
	cp = vmem_xalloc(kmem_cache_arena, csize, KMEM_CPU_CACHE_SIZE,
	    P2NPHASE(csize, KMEM_CPU_CACHE_SIZE), 0, NULL, NULL, VM_SLEEP);
	bzero(cp, csize);
	list_link_init(&cp->cache_link);

	if (align == 0)
		align = KMEM_ALIGN;

	/*
	 * If we're not at least KMEM_ALIGN aligned, we can't use free
	 * memory to hold bufctl information (because we can't safely
	 * perform word loads and stores on it).
	 */
	if (align < KMEM_ALIGN)
		cflags |= KMC_NOTOUCH;

	if ((align & (align - 1)) != 0 || align > vmp->vm_quantum)
		panic("kmem_cache_create: bad alignment %lu", align);

	mutex_enter(&kmem_flags_lock);
	if (kmem_flags & KMF_RANDOMIZE)
		kmem_flags = (((kmem_flags | ~KMF_RANDOM) + 1) & KMF_RANDOM) |
		    KMF_RANDOMIZE;
	cp->cache_flags = (kmem_flags | cflags) & KMF_DEBUG;
	mutex_exit(&kmem_flags_lock);

	/*
	 * Make sure all the various flags are reasonable.
	 */
	ASSERT(!(cflags & KMC_NOHASH) || !(cflags & KMC_NOTOUCH));

	if (cp->cache_flags & KMF_LITE) {
		if (bufsize >= kmem_lite_minsize &&
		    align <= kmem_lite_maxalign &&
		    P2PHASE(bufsize, kmem_lite_maxalign) != 0) {
			cp->cache_flags |= KMF_BUFTAG;
			cp->cache_flags &= ~(KMF_AUDIT | KMF_FIREWALL);
		} else {
			cp->cache_flags &= ~KMF_DEBUG;
		}
	}

	if (cp->cache_flags & KMF_DEADBEEF)
		cp->cache_flags |= KMF_REDZONE;

	if ((cflags & KMC_QCACHE) && (cp->cache_flags & KMF_AUDIT))
		cp->cache_flags |= KMF_NOMAGAZINE;

	if (cflags & KMC_NODEBUG)
		cp->cache_flags &= ~KMF_DEBUG;

	if (cflags & KMC_NOTOUCH)
		cp->cache_flags &= ~KMF_TOUCH;

	if (cflags & KMC_NOHASH)
		cp->cache_flags &= ~(KMF_AUDIT | KMF_FIREWALL);

	if (cflags & KMC_NOMAGAZINE)
		cp->cache_flags |= KMF_NOMAGAZINE;

	if ((cp->cache_flags & KMF_AUDIT) && !(cflags & KMC_NOTOUCH))
		cp->cache_flags |= KMF_REDZONE;

	if (!(cp->cache_flags & KMF_AUDIT))
		cp->cache_flags &= ~KMF_CONTENTS;

	if ((cp->cache_flags & KMF_BUFTAG) && bufsize >= kmem_minfirewall &&
	    !(cp->cache_flags & KMF_LITE) && !(cflags & KMC_NOHASH))
		cp->cache_flags |= KMF_FIREWALL;

	if (vmp != kmem_default_arena || kmem_firewall_arena == NULL)
		cp->cache_flags &= ~KMF_FIREWALL;

	if (cp->cache_flags & KMF_FIREWALL) {
		cp->cache_flags &= ~KMF_BUFTAG;
		cp->cache_flags |= KMF_NOMAGAZINE;
		ASSERT(vmp == kmem_default_arena);
		vmp = kmem_firewall_arena;
	}

	/*
	 * Set cache properties.
	 */
	(void) strncpy(cp->cache_name, name, KMEM_CACHE_NAMELEN);
	strident_canon(cp->cache_name, KMEM_CACHE_NAMELEN + 1);
	cp->cache_bufsize = bufsize;
	cp->cache_align = align;
	cp->cache_constructor = constructor;
	cp->cache_destructor = destructor;
	cp->cache_reclaim = reclaim;
	cp->cache_private = private;
	cp->cache_arena = vmp;
	cp->cache_cflags = cflags;

	/*
	 * Determine the chunk size.
	 */
	chunksize = bufsize;

	if (align >= KMEM_ALIGN) {
		chunksize = P2ROUNDUP(chunksize, KMEM_ALIGN);
		cp->cache_bufctl = chunksize - KMEM_ALIGN;
	}

	if (cp->cache_flags & KMF_BUFTAG) {
		cp->cache_bufctl = chunksize;
		cp->cache_buftag = chunksize;
		if (cp->cache_flags & KMF_LITE)
			chunksize += KMEM_BUFTAG_LITE_SIZE(kmem_lite_count);
		else
			chunksize += sizeof (kmem_buftag_t);
	}

	if (cp->cache_flags & KMF_DEADBEEF) {
		cp->cache_verify = MIN(cp->cache_buftag, kmem_maxverify);
		if (cp->cache_flags & KMF_LITE)
			cp->cache_verify = sizeof (uint64_t);
	}

	cp->cache_contents = MIN(cp->cache_bufctl, kmem_content_maxsave);

	cp->cache_chunksize = chunksize = P2ROUNDUP(chunksize, align);

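	/*
	 * Chunk layout illustration (example numbers, no debug flags,
	 * assuming KMEM_ALIGN == 8): for bufsize == 20 and default
	 * alignment, chunksize is rounded up to 24 and cache_bufctl == 16,
	 * so the bufctl linkage occupies the last word of each free buffer.
	 * KMF_BUFTAG instead places the tag past the buffer, growing the
	 * chunk by the buftag size.
	 */
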
	/*
	 * Now that we know the chunk size, determine the optimal slab size.
	 */
	if (vmp == kmem_firewall_arena) {
		cp->cache_slabsize = P2ROUNDUP(chunksize, vmp->vm_quantum);
		cp->cache_mincolor = cp->cache_slabsize - chunksize;
		cp->cache_maxcolor = cp->cache_mincolor;
		cp->cache_flags |= KMF_HASH;
		ASSERT(!(cp->cache_flags & KMF_BUFTAG));
	} else if ((cflags & KMC_NOHASH) || (!(cflags & KMC_NOTOUCH) &&
	    !(cp->cache_flags & KMF_AUDIT) &&
	    chunksize < vmp->vm_quantum / KMEM_VOID_FRACTION)) {
		cp->cache_slabsize = vmp->vm_quantum;
		cp->cache_mincolor = 0;
		cp->cache_maxcolor =
		    (cp->cache_slabsize - sizeof (kmem_slab_t)) % chunksize;
		ASSERT(chunksize + sizeof (kmem_slab_t) <= cp->cache_slabsize);
		ASSERT(!(cp->cache_flags & KMF_AUDIT));
	} else {
		size_t chunks, bestfit, waste, slabsize;
		size_t minwaste = LONG_MAX;

		for (chunks = 1; chunks <= KMEM_VOID_FRACTION; chunks++) {
			slabsize = P2ROUNDUP(chunksize * chunks,
			    vmp->vm_quantum);
			chunks = slabsize / chunksize;
			waste = (slabsize % chunksize) / chunks;
			if (waste < minwaste) {
				minwaste = waste;
				bestfit = slabsize;
			}
		}
		if (cflags & KMC_QCACHE)
			bestfit = VMEM_QCACHE_SLABSIZE(vmp->vm_qcache_max);
		cp->cache_slabsize = bestfit;
		cp->cache_mincolor = 0;
		cp->cache_maxcolor = bestfit % chunksize;
		cp->cache_flags |= KMF_HASH;
	}

	cp->cache_maxchunks = (cp->cache_slabsize / cp->cache_chunksize);
	cp->cache_partial_binshift = highbit(cp->cache_maxchunks / 16) + 1;

	if (cp->cache_flags & KMF_HASH) {
		ASSERT(!(cflags & KMC_NOHASH));
		cp->cache_bufctl_cache = (cp->cache_flags & KMF_AUDIT) ?
		    kmem_bufctl_audit_cache : kmem_bufctl_cache;
	}

	if (cp->cache_maxcolor >= vmp->vm_quantum)
		cp->cache_maxcolor = vmp->vm_quantum - 1;

	cp->cache_color = cp->cache_mincolor;

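	/*
	 * Best-fit illustration (example numbers only): for chunksize ==
	 * 3000 on a 4096-byte quantum, a one-page slab holds one chunk and
	 * wastes 1096 bytes, while a three-page slab (12288 bytes) holds
	 * four chunks and wastes only 288 bytes total (72 per chunk), so
	 * the loop above settles on slabsize 12288, leaving cache_maxcolor
	 * == 288 of slack available for coloring.
	 */
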
	/*
	 * Initialize the rest of the slab layer.
	 */
	mutex_init(&cp->cache_lock, NULL, MUTEX_DEFAULT, NULL);

	avl_create(&cp->cache_partial_slabs, kmem_partial_slab_cmp,
	    sizeof (kmem_slab_t), offsetof(kmem_slab_t, slab_link));
	/* LINTED: E_TRUE_LOGICAL_EXPR */
	ASSERT(sizeof (list_node_t) <= sizeof (avl_node_t));
	/* reuse partial slab AVL linkage for complete slab list linkage */
	list_create(&cp->cache_complete_slabs,
	    sizeof (kmem_slab_t), offsetof(kmem_slab_t, slab_link));

	if (cp->cache_flags & KMF_HASH) {
		cp->cache_hash_table = vmem_alloc(kmem_hash_arena,
		    KMEM_HASH_INITIAL * sizeof (void *), VM_SLEEP);
		bzero(cp->cache_hash_table,
		    KMEM_HASH_INITIAL * sizeof (void *));
		cp->cache_hash_mask = KMEM_HASH_INITIAL - 1;
		cp->cache_hash_shift = highbit((ulong_t)chunksize) - 1;
	}

	/*
	 * Initialize the depot.
	 */
	mutex_init(&cp->cache_depot_lock, NULL, MUTEX_DEFAULT, NULL);

	for (mtp = kmem_magtype; chunksize <= mtp->mt_minbuf; mtp++)
		continue;

	cp->cache_magtype = mtp;

	/*
	 * Initialize the CPU layer.
	 */
	for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
		kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid];
		mutex_init(&ccp->cc_lock, NULL, MUTEX_DEFAULT, NULL);
		ccp->cc_flags = cp->cache_flags;
		ccp->cc_rounds = -1;
		ccp->cc_prounds = -1;
	}

	/*
	 * Create the cache's kstats.
	 */
	if ((cp->cache_kstat = kstat_create("unix", 0, cp->cache_name,
	    "kmem_cache", KSTAT_TYPE_NAMED,
	    sizeof (kmem_cache_kstat) / sizeof (kstat_named_t),
	    KSTAT_FLAG_VIRTUAL)) != NULL) {
		cp->cache_kstat->ks_data = &kmem_cache_kstat;
		cp->cache_kstat->ks_update = kmem_cache_kstat_update;
		cp->cache_kstat->ks_private = cp;
		cp->cache_kstat->ks_lock = &kmem_cache_kstat_lock;
		kstat_install(cp->cache_kstat);
	}

	/*
	 * Add the cache to the global list.  This makes it visible
	 * to kmem_update(), so the cache must be ready for business.
	 */
	mutex_enter(&kmem_cache_lock);
	list_insert_tail(&kmem_caches, cp);
	mutex_exit(&kmem_cache_lock);

	if (kmem_ready)
		kmem_cache_magazine_enable(cp);

	return (cp);
}

static int
kmem_move_cmp(const void *buf, const void *p)
{
	const kmem_move_t *kmm = p;
	uintptr_t v1 = (uintptr_t)buf;
	uintptr_t v2 = (uintptr_t)kmm->kmm_from_buf;
	return (v1 < v2 ? -1 : (v1 > v2 ? 1 : 0));
}

static void
kmem_reset_reclaim_threshold(kmem_defrag_t *kmd)
{
	kmd->kmd_reclaim_numer = 1;
}

/*
 * Initially, when choosing candidate slabs for buffers to move, we want to be
 * very selective and take only slabs that are less than
 * (1 / KMEM_VOID_FRACTION) allocated. If we have difficulty finding candidate
 * slabs, then we raise the allocation ceiling incrementally. The reclaim
 * threshold is reset to (1 / KMEM_VOID_FRACTION) as soon as the cache is no
 * longer fragmented.
 */
static void
kmem_adjust_reclaim_threshold(kmem_defrag_t *kmd, int direction)
{
	if (direction > 0) {
		/* make it easier to find a candidate slab */
		if (kmd->kmd_reclaim_numer < (KMEM_VOID_FRACTION - 1)) {
			kmd->kmd_reclaim_numer++;
		}
	} else {
		/* be more selective */
		if (kmd->kmd_reclaim_numer > 1) {
			kmd->kmd_reclaim_numer--;
		}
	}
}

void
kmem_cache_set_move(kmem_cache_t *cp,
    kmem_cbrc_t (*move)(void *, void *, size_t, void *))
{
	kmem_defrag_t *defrag;

	ASSERT(move != NULL);
	/*
	 * The consolidator does not support NOTOUCH caches because kmem cannot
	 * initialize their slabs with the 0xbaddcafe memory pattern, which
	 * sets a low order bit usable by clients to distinguish uninitialized
	 * memory from known objects (see kmem_slab_create).
	 */
	ASSERT(!(cp->cache_cflags & KMC_NOTOUCH));
	ASSERT(!(cp->cache_cflags & KMC_IDENTIFIER));

	/*
	 * We should not be holding anyone's cache lock when calling
	 * kmem_cache_alloc(), so allocate in all cases before acquiring the
	 * lock.
	 */
	defrag = kmem_cache_alloc(kmem_defrag_cache, KM_SLEEP);

	mutex_enter(&cp->cache_lock);

	if (KMEM_IS_MOVABLE(cp)) {
		if (cp->cache_move == NULL) {
			ASSERT(cp->cache_slab_alloc == 0);

			cp->cache_defrag = defrag;
			defrag = NULL; /* nothing to free */
			bzero(cp->cache_defrag, sizeof (kmem_defrag_t));
			avl_create(&cp->cache_defrag->kmd_moves_pending,
			    kmem_move_cmp, sizeof (kmem_move_t),
			    offsetof(kmem_move_t, kmm_entry));
			/* LINTED: E_TRUE_LOGICAL_EXPR */
			ASSERT(sizeof (list_node_t) <= sizeof (avl_node_t));
			/* reuse the slab's AVL linkage for deadlist linkage */
			list_create(&cp->cache_defrag->kmd_deadlist,
			    sizeof (kmem_slab_t),
			    offsetof(kmem_slab_t, slab_link));
			kmem_reset_reclaim_threshold(cp->cache_defrag);
		}
		cp->cache_move = move;
	}

	mutex_exit(&cp->cache_lock);

	if (defrag != NULL) {
		kmem_cache_free(kmem_defrag_cache, defrag); /* unused */
	}
}

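/*
 * Registration sketch (hypothetical client, not part of this file):
 *
 *	kmem_cache_set_move(object_cache, object_move);
 *
 * where object_move has the kmem_cbrc_t (*)(void *, void *, size_t,
 * void *) signature and object_cache was created without KMC_NOTOUCH
 * or KMC_IDENTIFIER, per the ASSERTs above.
 */
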
void
kmem_cache_destroy(kmem_cache_t *cp)
{
	int cpu_seqid;

	/*
	 * Remove the cache from the global cache list so that no one else
	 * can schedule tasks on its behalf, wait for any pending tasks to
	 * complete, purge the cache, and then destroy it.
	 */
	mutex_enter(&kmem_cache_lock);
	list_remove(&kmem_caches, cp);
	mutex_exit(&kmem_cache_lock);

	if (kmem_taskq != NULL)
		taskq_wait(kmem_taskq);
	if (kmem_move_taskq != NULL)
		taskq_wait(kmem_move_taskq);

	kmem_cache_magazine_purge(cp);

	mutex_enter(&cp->cache_lock);
	if (cp->cache_buftotal != 0)
		cmn_err(CE_WARN, "kmem_cache_destroy: '%s' (%p) not empty",
		    cp->cache_name, (void *)cp);
	if (cp->cache_defrag != NULL) {
		avl_destroy(&cp->cache_defrag->kmd_moves_pending);
		list_destroy(&cp->cache_defrag->kmd_deadlist);
		kmem_cache_free(kmem_defrag_cache, cp->cache_defrag);
		cp->cache_defrag = NULL;
	}
	/*
	 * The cache is now dead.  There should be no further activity.  We
	 * enforce this by setting land mines in the constructor, destructor,
	 * reclaim, and move routines that induce a kernel text fault if
	 * invoked.
	 */
	cp->cache_constructor = (int (*)(void *, void *, int))1;
	cp->cache_destructor = (void (*)(void *, void *))2;
	cp->cache_reclaim = (void (*)(void *))3;
	cp->cache_move = (kmem_cbrc_t (*)(void *, void *, size_t, void *))4;
	mutex_exit(&cp->cache_lock);

	kstat_delete(cp->cache_kstat);

	if (cp->cache_hash_table != NULL)
		vmem_free(kmem_hash_arena, cp->cache_hash_table,
		    (cp->cache_hash_mask + 1) * sizeof (void *));

	for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++)
		mutex_destroy(&cp->cache_cpu[cpu_seqid].cc_lock);

	mutex_destroy(&cp->cache_depot_lock);
	mutex_destroy(&cp->cache_lock);

	vmem_free(kmem_cache_arena, cp, KMEM_CACHE_SIZE(max_ncpus));
}

/*ARGSUSED*/
static int
kmem_cpu_setup(cpu_setup_t what, int id, void *arg)
{
	ASSERT(MUTEX_HELD(&cpu_lock));
	if (what == CPU_UNCONFIG) {
		kmem_cache_applyall(kmem_cache_magazine_purge,
		    kmem_taskq, TQ_SLEEP);
		kmem_cache_applyall(kmem_cache_magazine_enable,
		    kmem_taskq, TQ_SLEEP);
	}
	return (0);
}

static void
kmem_alloc_caches_create(const int *array, size_t count,
    kmem_cache_t **alloc_table, size_t maxbuf, uint_t shift)
{
	char name[KMEM_CACHE_NAMELEN + 1];
	size_t table_unit = (1 << shift); /* range of one alloc_table entry */
	size_t size = table_unit;
	int i;

	for (i = 0; i < count; i++) {
		size_t cache_size = array[i];
		size_t align = KMEM_ALIGN;
		kmem_cache_t *cp;

		/* if the table has an entry for maxbuf, we're done */
		if (size > maxbuf)
			break;

		/* cache size must be a multiple of the table unit */
		ASSERT(P2PHASE(cache_size, table_unit) == 0);

		/*
		 * If they allocate a multiple of the coherency granularity,
		 * they get a coherency-granularity-aligned address.
		 */
		if (IS_P2ALIGNED(cache_size, 64))
			align = 64;
		if (IS_P2ALIGNED(cache_size, PAGESIZE))
			align = PAGESIZE;
		(void) snprintf(name, sizeof (name),
		    "kmem_alloc_%lu", cache_size);
		cp = kmem_cache_create(name, cache_size, align,
		    NULL, NULL, NULL, NULL, NULL, KMC_KMEM_ALLOC);

		while (size <= cache_size) {
			alloc_table[(size - 1) >> shift] = cp;
			size += table_unit;
		}
	}

	ASSERT(size > maxbuf);		/* i.e. maxbuf <= max(cache_size) */
}

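/*
 * Table-fill illustration (example numbers, assuming the size array
 * contains 8, 16, and 24 and shift == KMEM_ALIGN_SHIFT): the 24-byte
 * cache fills the slot covering request sizes 17..24, so a 20-byte
 * allocation indexes alloc_table[(20 - 1) >> 3] == alloc_table[2] and
 * is served from kmem_alloc_24.
 */
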
3732dce01e3fSJonathan W Adams */ 3733dce01e3fSJonathan W Adams if (IS_P2ALIGNED(cache_size, 64)) 3734dce01e3fSJonathan W Adams align = 64; 3735dce01e3fSJonathan W Adams if (IS_P2ALIGNED(cache_size, PAGESIZE)) 3736dce01e3fSJonathan W Adams align = PAGESIZE; 3737dce01e3fSJonathan W Adams (void) snprintf(name, sizeof (name), 3738dce01e3fSJonathan W Adams "kmem_alloc_%lu", cache_size); 3739dce01e3fSJonathan W Adams cp = kmem_cache_create(name, cache_size, align, 3740dce01e3fSJonathan W Adams NULL, NULL, NULL, NULL, NULL, KMC_KMEM_ALLOC); 3741dce01e3fSJonathan W Adams 3742dce01e3fSJonathan W Adams while (size <= cache_size) { 3743dce01e3fSJonathan W Adams alloc_table[(size - 1) >> shift] = cp; 3744dce01e3fSJonathan W Adams size += table_unit; 3745dce01e3fSJonathan W Adams } 3746dce01e3fSJonathan W Adams } 3747dce01e3fSJonathan W Adams 3748dce01e3fSJonathan W Adams ASSERT(size > maxbuf); /* i.e. maxbuf <= max(cache_size) */ 3749dce01e3fSJonathan W Adams } 3750dce01e3fSJonathan W Adams 37517c478bd9Sstevel@tonic-gate static void 37527c478bd9Sstevel@tonic-gate kmem_cache_init(int pass, int use_large_pages) 37537c478bd9Sstevel@tonic-gate { 37547c478bd9Sstevel@tonic-gate int i; 3755dce01e3fSJonathan W Adams size_t maxbuf; 37567c478bd9Sstevel@tonic-gate kmem_magtype_t *mtp; 37577c478bd9Sstevel@tonic-gate 37587c478bd9Sstevel@tonic-gate for (i = 0; i < sizeof (kmem_magtype) / sizeof (*mtp); i++) { 3759dce01e3fSJonathan W Adams char name[KMEM_CACHE_NAMELEN + 1]; 3760dce01e3fSJonathan W Adams 37617c478bd9Sstevel@tonic-gate mtp = &kmem_magtype[i]; 37627c478bd9Sstevel@tonic-gate (void) sprintf(name, "kmem_magazine_%d", mtp->mt_magsize); 37637c478bd9Sstevel@tonic-gate mtp->mt_cache = kmem_cache_create(name, 37647c478bd9Sstevel@tonic-gate (mtp->mt_magsize + 1) * sizeof (void *), 37657c478bd9Sstevel@tonic-gate mtp->mt_align, NULL, NULL, NULL, NULL, 37667c478bd9Sstevel@tonic-gate kmem_msb_arena, KMC_NOHASH); 37677c478bd9Sstevel@tonic-gate } 37687c478bd9Sstevel@tonic-gate 37697c478bd9Sstevel@tonic-gate kmem_slab_cache = kmem_cache_create("kmem_slab_cache", 37707c478bd9Sstevel@tonic-gate sizeof (kmem_slab_t), 0, NULL, NULL, NULL, NULL, 37717c478bd9Sstevel@tonic-gate kmem_msb_arena, KMC_NOHASH); 37727c478bd9Sstevel@tonic-gate 37737c478bd9Sstevel@tonic-gate kmem_bufctl_cache = kmem_cache_create("kmem_bufctl_cache", 37747c478bd9Sstevel@tonic-gate sizeof (kmem_bufctl_t), 0, NULL, NULL, NULL, NULL, 37757c478bd9Sstevel@tonic-gate kmem_msb_arena, KMC_NOHASH); 37767c478bd9Sstevel@tonic-gate 37777c478bd9Sstevel@tonic-gate kmem_bufctl_audit_cache = kmem_cache_create("kmem_bufctl_audit_cache", 37787c478bd9Sstevel@tonic-gate sizeof (kmem_bufctl_audit_t), 0, NULL, NULL, NULL, NULL, 37797c478bd9Sstevel@tonic-gate kmem_msb_arena, KMC_NOHASH); 37807c478bd9Sstevel@tonic-gate 37817c478bd9Sstevel@tonic-gate if (pass == 2) { 37827c478bd9Sstevel@tonic-gate kmem_va_arena = vmem_create("kmem_va", 37837c478bd9Sstevel@tonic-gate NULL, 0, PAGESIZE, 37847c478bd9Sstevel@tonic-gate vmem_alloc, vmem_free, heap_arena, 37857c478bd9Sstevel@tonic-gate 8 * PAGESIZE, VM_SLEEP); 37867c478bd9Sstevel@tonic-gate 37877c478bd9Sstevel@tonic-gate if (use_large_pages) { 37887c478bd9Sstevel@tonic-gate kmem_default_arena = vmem_xcreate("kmem_default", 37897c478bd9Sstevel@tonic-gate NULL, 0, PAGESIZE, 37907c478bd9Sstevel@tonic-gate segkmem_alloc_lp, segkmem_free_lp, kmem_va_arena, 37917c478bd9Sstevel@tonic-gate 0, VM_SLEEP); 37927c478bd9Sstevel@tonic-gate } else { 37937c478bd9Sstevel@tonic-gate kmem_default_arena = vmem_create("kmem_default", 
	kmem_slab_cache = kmem_cache_create("kmem_slab_cache",
	    sizeof (kmem_slab_t), 0, NULL, NULL, NULL, NULL,
	    kmem_msb_arena, KMC_NOHASH);

	kmem_bufctl_cache = kmem_cache_create("kmem_bufctl_cache",
	    sizeof (kmem_bufctl_t), 0, NULL, NULL, NULL, NULL,
	    kmem_msb_arena, KMC_NOHASH);

	kmem_bufctl_audit_cache = kmem_cache_create("kmem_bufctl_audit_cache",
	    sizeof (kmem_bufctl_audit_t), 0, NULL, NULL, NULL, NULL,
	    kmem_msb_arena, KMC_NOHASH);

	if (pass == 2) {
		kmem_va_arena = vmem_create("kmem_va",
		    NULL, 0, PAGESIZE,
		    vmem_alloc, vmem_free, heap_arena,
		    8 * PAGESIZE, VM_SLEEP);

		if (use_large_pages) {
			kmem_default_arena = vmem_xcreate("kmem_default",
			    NULL, 0, PAGESIZE,
			    segkmem_alloc_lp, segkmem_free_lp, kmem_va_arena,
			    0, VM_SLEEP);
		} else {
			kmem_default_arena = vmem_create("kmem_default",
			    NULL, 0, PAGESIZE,
			    segkmem_alloc, segkmem_free, kmem_va_arena,
			    0, VM_SLEEP);
		}

		/* Figure out what our maximum cache size is */
		maxbuf = kmem_max_cached;
		if (maxbuf <= KMEM_MAXBUF) {
			maxbuf = 0;
			kmem_max_cached = KMEM_MAXBUF;
		} else {
			size_t size = 0;
			size_t max =
			    sizeof (kmem_big_alloc_sizes) / sizeof (int);
			/*
			 * Round maxbuf up to an existing cache size. If maxbuf
			 * is larger than the largest cache, we truncate it to
			 * the largest cache's size.
			 */
			for (i = 0; i < max; i++) {
				size = kmem_big_alloc_sizes[i];
				if (maxbuf <= size)
					break;
			}
			kmem_max_cached = maxbuf = size;
		}

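		/*
		 * Rounding sketch (hypothetical tuning values): if
		 * kmem_max_cached were set to 20000 in /etc/system and the
		 * big-alloc size table contained 16384 and 24576, the loop
		 * above would stop at 24576, which becomes both maxbuf and
		 * kmem_max_cached.
		 */
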
		/*
		 * The big alloc table may not be completely overwritten, so
		 * we clear out any stale cache pointers from the first pass.
		 */
		bzero(kmem_big_alloc_table, sizeof (kmem_big_alloc_table));
	} else {
		/*
		 * During the first pass, the kmem_alloc_* caches
		 * are treated as metadata.
		 */
		kmem_default_arena = kmem_msb_arena;
		maxbuf = KMEM_BIG_MAXBUF_32BIT;
	}

	/*
	 * Set up the default caches to back kmem_alloc().
	 */
	kmem_alloc_caches_create(
	    kmem_alloc_sizes, sizeof (kmem_alloc_sizes) / sizeof (int),
	    kmem_alloc_table, KMEM_MAXBUF, KMEM_ALIGN_SHIFT);

	kmem_alloc_caches_create(
	    kmem_big_alloc_sizes, sizeof (kmem_big_alloc_sizes) / sizeof (int),
	    kmem_big_alloc_table, maxbuf, KMEM_BIG_SHIFT);

	kmem_big_alloc_table_max = maxbuf >> KMEM_BIG_SHIFT;
}

void
kmem_init(void)
{
	kmem_cache_t *cp;
	int old_kmem_flags = kmem_flags;
	int use_large_pages = 0;
	size_t maxverify, minfirewall;

	kstat_init();

	/*
	 * Small-memory systems (< 24 MB) can't handle kmem_flags overhead.
	 */
	if (physmem < btop(24 << 20) && !(old_kmem_flags & KMF_STICKY))
		kmem_flags = 0;

	/*
	 * Don't do firewalled allocations if the heap is less than 1TB
	 * (i.e. on a 32-bit kernel).  The resulting VM_NEXTFIT allocations
	 * would create too much fragmentation in a small heap.
	 */
#if defined(_LP64)
	maxverify = minfirewall = PAGESIZE / 2;
#else
	maxverify = minfirewall = ULONG_MAX;
#endif

	/* LINTED */
	ASSERT(sizeof (kmem_cpu_cache_t) == KMEM_CPU_CACHE_SIZE);

	list_create(&kmem_caches, sizeof (kmem_cache_t),
	    offsetof(kmem_cache_t, cache_link));

	kmem_metadata_arena = vmem_create("kmem_metadata", NULL, 0, PAGESIZE,
	    vmem_alloc, vmem_free, heap_arena, 8 * PAGESIZE,
	    VM_SLEEP | VMC_NO_QCACHE);

	kmem_msb_arena = vmem_create("kmem_msb", NULL, 0,
	    PAGESIZE, segkmem_alloc, segkmem_free, kmem_metadata_arena, 0,
	    VM_SLEEP);

	kmem_cache_arena = vmem_create("kmem_cache", NULL, 0, KMEM_ALIGN,
	    segkmem_alloc, segkmem_free, kmem_metadata_arena, 0, VM_SLEEP);

	kmem_hash_arena = vmem_create("kmem_hash", NULL, 0, KMEM_ALIGN,
	    segkmem_alloc, segkmem_free, kmem_metadata_arena, 0, VM_SLEEP);

	kmem_log_arena = vmem_create("kmem_log", NULL, 0, KMEM_ALIGN,
	    segkmem_alloc, segkmem_free, heap_arena, 0, VM_SLEEP);

	kmem_firewall_va_arena = vmem_create("kmem_firewall_va",
	    NULL, 0, PAGESIZE,
	    kmem_firewall_va_alloc, kmem_firewall_va_free, heap_arena,
	    0, VM_SLEEP);

	kmem_firewall_arena = vmem_create("kmem_firewall", NULL, 0, PAGESIZE,
	    segkmem_alloc, segkmem_free, kmem_firewall_va_arena, 0, VM_SLEEP);

	/* temporary oversize arena for mod_read_system_file */
	kmem_oversize_arena = vmem_create("kmem_oversize", NULL, 0, PAGESIZE,
	    segkmem_alloc, segkmem_free, heap_arena, 0, VM_SLEEP);

	kmem_reap_interval = 15 * hz;

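	/*
	 * For orientation, the arena topology assembled above (a summary of
	 * the vmem_create() calls, not additional structure): kmem_metadata
	 * imports from heap_arena and backs kmem_msb, kmem_cache, and
	 * kmem_hash; kmem_log and kmem_firewall_va import from heap_arena
	 * directly, with kmem_firewall layered over kmem_firewall_va; and
	 * kmem_oversize exists at this point only as a temporary arena for
	 * mod_read_system_file().
	 */
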
	/*
	 * Read /etc/system.  This is a chicken-and-egg problem because
	 * kmem_flags may be set in /etc/system, but mod_read_system_file()
	 * needs to use the allocator.  The simplest solution is to create
	 * all the standard kmem caches, read /etc/system, destroy all the
	 * caches we just created, and then create them all again in light
	 * of the (possibly) new kmem_flags and other kmem tunables.
	 */
	kmem_cache_init(1, 0);

	mod_read_system_file(boothowto & RB_ASKNAME);

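	/*
	 * Tuning sketch (hypothetical /etc/system contents; 0xf assumes
	 * KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE | KMF_CONTENTS):
	 *
	 *	set kmem_flags=0xf
	 *
	 * Because of the bootstrap above, a value set there takes effect
	 * when the caches are recreated in the second kmem_cache_init()
	 * pass below.
	 */
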
		    kmem_firewall_va_arena : heap_arena, 0, VM_SLEEP);
	}

	kmem_cache_init(2, use_large_pages);

	if (kmem_flags & (KMF_AUDIT | KMF_RANDOMIZE)) {
		if (kmem_transaction_log_size == 0)
			kmem_transaction_log_size = kmem_maxavail() / 50;
		kmem_transaction_log = kmem_log_init(kmem_transaction_log_size);
	}

	if (kmem_flags & (KMF_CONTENTS | KMF_RANDOMIZE)) {
		if (kmem_content_log_size == 0)
			kmem_content_log_size = kmem_maxavail() / 50;
		kmem_content_log = kmem_log_init(kmem_content_log_size);
	}

	kmem_failure_log = kmem_log_init(kmem_failure_log_size);

	kmem_slab_log = kmem_log_init(kmem_slab_log_size);

	/*
	 * Initialize STREAMS message caches so allocb() is available.
	 * This allows us to initialize the logging framework (cmn_err(9F),
	 * strlog(9F), etc) so we can start recording messages.
	 */
	streams_msg_init();

	/*
	 * Initialize the ZSD framework in Zones so modules loaded henceforth
	 * can register their callbacks.
	 */
	zone_zsd_init();

	log_init();
	taskq_init();

	/*
	 * Warn about invalid or dangerous values of kmem_flags.
	 * Always warn about unsupported values.
	 */
	if (((kmem_flags & ~(KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE |
	    KMF_CONTENTS | KMF_LITE)) != 0) ||
	    ((kmem_flags & KMF_LITE) && kmem_flags != KMF_LITE))
		cmn_err(CE_WARN, "kmem_flags set to unsupported value 0x%x. "
		    "See the Solaris Tunable Parameters Reference Manual.",
		    kmem_flags);

#ifdef DEBUG
	if ((kmem_flags & KMF_DEBUG) == 0)
		cmn_err(CE_NOTE, "kmem debugging disabled.");
#else
	/*
	 * For non-debug kernels, the only "normal" flags are 0, KMF_LITE,
	 * KMF_REDZONE, and KMF_CONTENTS (the last because it is only enabled
	 * if KMF_AUDIT is set).  We should warn the user about the performance
	 * penalty of KMF_AUDIT or KMF_DEADBEEF if they are set and KMF_LITE
	 * isn't set (since that disables AUDIT).
	 */
	if (!(kmem_flags & KMF_LITE) &&
	    (kmem_flags & (KMF_AUDIT | KMF_DEADBEEF)) != 0)
		cmn_err(CE_WARN, "High-overhead kmem debugging features "
		    "enabled (kmem_flags = 0x%x). Performance degradation "
		    "and large memory overhead possible. "
		    "See the Solaris Tunable Parameters Reference Manual.",
		    kmem_flags);
#endif /* not DEBUG */

	kmem_cache_applyall(kmem_cache_magazine_enable, NULL, TQ_SLEEP);

	kmem_ready = 1;

	/*
	 * Initialize the platform-specific aligned/DMA memory allocator.
	 */
	ka_init();

	/*
	 * Initialize 32-bit ID cache.
	 */
	id32_init();

	/*
	 * Initialize the networking stack so that loaded modules can
	 * register their callbacks.
	 */
	netstack_init();
}

static void
kmem_move_init(void)
{
	kmem_defrag_cache = kmem_cache_create("kmem_defrag_cache",
	    sizeof (kmem_defrag_t), 0, NULL, NULL, NULL, NULL,
	    kmem_msb_arena, KMC_NOHASH);
	kmem_move_cache = kmem_cache_create("kmem_move_cache",
	    sizeof (kmem_move_t), 0, NULL, NULL, NULL, NULL,
	    kmem_msb_arena, KMC_NOHASH);

	/*
	 * kmem guarantees that move callbacks are sequential and that even
	 * across multiple caches no two moves ever execute simultaneously.
	 * Move callbacks are processed on a separate taskq so that client code
	 * does not interfere with internal maintenance tasks.  The single
	 * taskq thread is what makes the moves sequential.
	 */
	kmem_move_taskq = taskq_create_instance("kmem_move_taskq", 0, 1,
	    minclsyspri, 100, INT_MAX, TASKQ_PREPOPULATE);
}

void
kmem_thread_init(void)
{
	kmem_move_init();
	kmem_taskq = taskq_create_instance("kmem_taskq", 0, 1, minclsyspri,
	    300, INT_MAX, TASKQ_PREPOPULATE);
}

void
kmem_mp_init(void)
{
	mutex_enter(&cpu_lock);
	register_cpu_setup_func(kmem_cpu_setup, NULL);
	mutex_exit(&cpu_lock);

	kmem_update_timeout(NULL);

	taskq_mp_init();
}
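/*
 * Illustrative sketch (not part of the allocator): a client opts in to the
 * consolidator by registering a move callback for its cache with
 * kmem_cache_set_move().  The callback itself is sketched near the response
 * descriptions below.  The names foo_t, foo_cache, foo_construct,
 * foo_destruct, and foo_move() are hypothetical.
 *
 *	static kmem_cbrc_t foo_move(void *, void *, size_t, void *);
 *
 *	foo_cache = kmem_cache_create("foo_cache", sizeof (foo_t), 0,
 *	    foo_construct, foo_destruct, NULL, NULL, NULL, 0);
 *	kmem_cache_set_move(foo_cache, foo_move);
 */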
/*
 * Return the slab of the allocated buffer, or NULL if the buffer is not
 * allocated.  This function may be called with a known slab address to
 * determine whether or not the buffer is allocated, or with a NULL slab
 * address to obtain an allocated buffer's slab.
 */
static kmem_slab_t *
kmem_slab_allocated(kmem_cache_t *cp, kmem_slab_t *sp, void *buf)
{
	kmem_bufctl_t *bcp, *bufbcp;

	ASSERT(MUTEX_HELD(&cp->cache_lock));
	ASSERT(sp == NULL || KMEM_SLAB_MEMBER(sp, buf));

	if (cp->cache_flags & KMF_HASH) {
		for (bcp = *KMEM_HASH(cp, buf);
		    (bcp != NULL) && (bcp->bc_addr != buf);
		    bcp = bcp->bc_next) {
			continue;
		}
		ASSERT(sp != NULL && bcp != NULL ? sp == bcp->bc_slab : 1);
		return (bcp == NULL ? NULL : bcp->bc_slab);
	}

	if (sp == NULL) {
		sp = KMEM_SLAB(cp, buf);
	}
	bufbcp = KMEM_BUFCTL(cp, buf);
	for (bcp = sp->slab_head;
	    (bcp != NULL) && (bcp != bufbcp);
	    bcp = bcp->bc_next) {
		continue;
	}
	return (bcp == NULL ? sp : NULL);
}
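/*
 * Illustrative only: the two ways to call kmem_slab_allocated(), with
 * cache_lock held in both cases.  With a NULL slab argument it looks up the
 * buffer's slab; with a known slab it serves as an is-allocated test.
 *
 *	sp = kmem_slab_allocated(cp, NULL, buf);	(buf's slab, or NULL)
 *	if (kmem_slab_allocated(cp, sp, buf) == NULL)
 *		(buf is known to be on sp and is currently free)
 */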
static boolean_t
kmem_slab_is_reclaimable(kmem_cache_t *cp, kmem_slab_t *sp, int flags)
{
	long refcnt = sp->slab_refcnt;

	ASSERT(cp->cache_defrag != NULL);

	/*
	 * For code coverage we want to be able to move an object within the
	 * same slab (the only partial slab) even if allocating the destination
	 * buffer resulted in a completely allocated slab.
	 */
	if (flags & KMM_DEBUG) {
		return ((flags & KMM_DESPERATE) ||
		    ((sp->slab_flags & KMEM_SLAB_NOMOVE) == 0));
	}

	/* If we're desperate, we don't care if the client said NO. */
	if (flags & KMM_DESPERATE) {
		return (refcnt < sp->slab_chunks); /* any partial */
	}

	if (sp->slab_flags & KMEM_SLAB_NOMOVE) {
		return (B_FALSE);
	}

	if ((refcnt == 1) || kmem_move_any_partial) {
		return (refcnt < sp->slab_chunks);
	}

	/*
	 * The reclaim threshold is adjusted at each kmem_cache_scan() so that
	 * slabs with a progressively higher percentage of used buffers can be
	 * reclaimed until the cache as a whole is no longer fragmented.
	 *
	 *	sp->slab_refcnt		kmd_reclaim_numer
	 *	---------------    <	------------------
	 *	sp->slab_chunks		KMEM_VOID_FRACTION
	 */
	return ((refcnt * KMEM_VOID_FRACTION) <
	    (sp->slab_chunks * cp->cache_defrag->kmd_reclaim_numer));
}
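/*
 * A worked instance of the inequality above, using illustrative (not
 * necessarily default) values kmd_reclaim_numer = 8 and
 * KMEM_VOID_FRACTION = 64, i.e. a slab qualifies when fewer than 8/64 = 1/8
 * of its chunks are allocated.  For a slab with slab_chunks = 32:
 *
 *	slab_refcnt = 3:  3 * 64 = 192  <  32 * 8 = 256	=> reclaimable
 *	slab_refcnt = 4:  4 * 64 = 256, not < 256	=> skipped
 *
 * The skipped slab remains a candidate for later scans, once the threshold
 * has been loosened by kmem_adjust_reclaim_threshold().
 */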
static void *
kmem_hunt_mag(kmem_cache_t *cp, kmem_magazine_t *m, int n, void *buf,
    void *tbuf)
{
	int i;		/* magazine round index */

	for (i = 0; i < n; i++) {
		if (buf == m->mag_round[i]) {
			if (cp->cache_flags & KMF_BUFTAG) {
				(void) kmem_cache_free_debug(cp, tbuf,
				    caller());
			}
			m->mag_round[i] = tbuf;
			return (buf);
		}
	}

	return (NULL);
}

/*
 * Hunt the magazine layer for the given buffer.  If found, the buffer is
 * removed from the magazine layer and returned, otherwise NULL is returned.
 * The returned buffer is then both free (from the allocator's point of view)
 * and constructed, since magazines hold free, constructed buffers.
 */
static void *
kmem_hunt_mags(kmem_cache_t *cp, void *buf)
{
	kmem_cpu_cache_t *ccp;
	kmem_magazine_t *m;
	int cpu_seqid;
	int n;		/* magazine rounds */
	void *tbuf;	/* temporary swap buffer */

	ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));

	/*
	 * Allocate a buffer to swap with the one we hope to pull out of a
	 * magazine when found.
	 */
	tbuf = kmem_cache_alloc(cp, KM_NOSLEEP);
	if (tbuf == NULL) {
		KMEM_STAT_ADD(kmem_move_stats.kms_hunt_alloc_fail);
		return (NULL);
	}
	if (tbuf == buf) {
		KMEM_STAT_ADD(kmem_move_stats.kms_hunt_lucky);
		if (cp->cache_flags & KMF_BUFTAG) {
			(void) kmem_cache_free_debug(cp, buf, caller());
		}
		return (buf);
	}

	/* Hunt the depot. */
	mutex_enter(&cp->cache_depot_lock);
	n = cp->cache_magtype->mt_magsize;
	for (m = cp->cache_full.ml_list; m != NULL; m = m->mag_next) {
		if (kmem_hunt_mag(cp, m, n, buf, tbuf) != NULL) {
			mutex_exit(&cp->cache_depot_lock);
			return (buf);
		}
	}
	mutex_exit(&cp->cache_depot_lock);

	/* Hunt the per-CPU magazines. */
	for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) {
		ccp = &cp->cache_cpu[cpu_seqid];

		mutex_enter(&ccp->cc_lock);
		m = ccp->cc_loaded;
		n = ccp->cc_rounds;
		if (kmem_hunt_mag(cp, m, n, buf, tbuf) != NULL) {
			mutex_exit(&ccp->cc_lock);
			return (buf);
		}
		m = ccp->cc_ploaded;
		n = ccp->cc_prounds;
		if (kmem_hunt_mag(cp, m, n, buf, tbuf) != NULL) {
			mutex_exit(&ccp->cc_lock);
			return (buf);
		}
		mutex_exit(&ccp->cc_lock);
	}

	kmem_cache_free(cp, tbuf);
	return (NULL);
}

/*
 * May be called from the kmem_move_taskq, from kmem_cache_move_notify_task(),
 * or when the buffer is freed.
 */
static void
kmem_slab_move_yes(kmem_cache_t *cp, kmem_slab_t *sp, void *from_buf)
{
	ASSERT(MUTEX_HELD(&cp->cache_lock));
	ASSERT(KMEM_SLAB_MEMBER(sp, from_buf));

	if (!KMEM_SLAB_IS_PARTIAL(sp)) {
		return;
	}

	if (sp->slab_flags & KMEM_SLAB_NOMOVE) {
		if (KMEM_SLAB_OFFSET(sp, from_buf) == sp->slab_stuck_offset) {
			avl_remove(&cp->cache_partial_slabs, sp);
			sp->slab_flags &= ~KMEM_SLAB_NOMOVE;
			sp->slab_stuck_offset = (uint32_t)-1;
			avl_add(&cp->cache_partial_slabs, sp);
		}
	} else {
		sp->slab_later_count = 0;
		sp->slab_stuck_offset = (uint32_t)-1;
	}
}

static void
kmem_slab_move_no(kmem_cache_t *cp, kmem_slab_t *sp, void *from_buf)
{
	ASSERT(taskq_member(kmem_move_taskq, curthread));
	ASSERT(MUTEX_HELD(&cp->cache_lock));
	ASSERT(KMEM_SLAB_MEMBER(sp, from_buf));

	if (!KMEM_SLAB_IS_PARTIAL(sp)) {
		return;
	}

	avl_remove(&cp->cache_partial_slabs, sp);
	sp->slab_later_count = 0;
	sp->slab_flags |= KMEM_SLAB_NOMOVE;
	sp->slab_stuck_offset = KMEM_SLAB_OFFSET(sp, from_buf);
	avl_add(&cp->cache_partial_slabs, sp);
}

static void kmem_move_end(kmem_cache_t *, kmem_move_t *);
/*
 * The move callback takes two buffer addresses, the buffer to be moved, and a
 * newly allocated and constructed buffer selected by kmem as the destination.
 * It also takes the size of the buffer and an optional user argument specified
 * at cache creation time.  kmem guarantees that the buffer to be moved has not
 * been unmapped by the virtual memory subsystem.  Beyond that, it cannot
 * guarantee the present whereabouts of the buffer to be moved, so it is up to
 * the client to safely determine whether or not it is still using the buffer.
 * The client must not free either of the buffers passed to the move callback,
 * since kmem wants to free them directly to the slab layer.  The client
 * response tells kmem which of the two buffers to free:
 *
 * YES		kmem frees the old buffer (the move was successful)
 * NO		kmem frees the new buffer, marks the slab of the old buffer
 *		non-reclaimable to avoid bothering the client again
 * LATER	kmem frees the new buffer, increments slab_later_count
 * DONT_KNOW	kmem frees the new buffer, searches mags for the old buffer
 * DONT_NEED	kmem frees both the old buffer and the new buffer
 *
 * The pending callback argument now being processed contains both of the
 * buffers (old and new) passed to the move callback function, the slab of the
 * old buffer, and flags related to the move request, such as whether or not
 * the system was desperate for memory.
 *
 * Slabs are not freed while there is a pending callback, but instead are kept
 * on a deadlist, which is drained after the last callback completes.  This
 * means that slabs are safe to access until kmem_move_end(), no matter how
 * many of their buffers have been freed.  Once slab_refcnt reaches zero, it
 * stays at zero for as long as the slab remains on the deadlist and until
 * the slab is freed.
 */
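/*
 * To make the response protocol concrete, here is a sketch of a hypothetical
 * client's move callback; none of the names below (foo_t, foo_lock,
 * foo_hash_contains(), foo_is_idle(), foo_relocate()) exist in this file, and
 * a real callback must apply its own rules for deciding whether the old
 * buffer is still in use.  A client whose objects can simply be discarded
 * might instead tear the object down and return KMEM_CBRC_DONT_NEED.
 *
 *	static kmem_cbrc_t
 *	foo_move(void *old, void *new, size_t size, void *user_arg)
 *	{
 *		foo_t *fp = old;
 *
 *		if (!mutex_tryenter(&foo_lock))
 *			return (KMEM_CBRC_LATER);
 *
 *		if (!foo_hash_contains(fp)) {
 *			(never installed, or already torn down: the buffer
 *			may well be free, but only kmem can know for sure)
 *			mutex_exit(&foo_lock);
 *			return (KMEM_CBRC_DONT_KNOW);
 *		}
 *
 *		if (foo_is_idle(fp)) {
 *			foo_relocate(fp, new);	(copy and swap pointers)
 *			mutex_exit(&foo_lock);
 *			return (KMEM_CBRC_YES);
 *		}
 *
 *		mutex_exit(&foo_lock);
 *		return (KMEM_CBRC_NO);	(pinned for this object's lifetime)
 *	}
 */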
static void
kmem_move_buffer(kmem_move_t *callback)
{
	kmem_cbrc_t response;
	kmem_slab_t *sp = callback->kmm_from_slab;
	kmem_cache_t *cp = sp->slab_cache;
	boolean_t free_on_slab;

	ASSERT(taskq_member(kmem_move_taskq, curthread));
	ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
	ASSERT(KMEM_SLAB_MEMBER(sp, callback->kmm_from_buf));

	/*
	 * The number of allocated buffers on the slab may have changed since
	 * we last checked the slab's reclaimability (when the pending move
	 * was enqueued), or the client may have responded NO when asked to
	 * move another buffer on the same slab.
	 */
	if (!kmem_slab_is_reclaimable(cp, sp, callback->kmm_flags)) {
		KMEM_STAT_ADD(kmem_move_stats.kms_no_longer_reclaimable);
		KMEM_STAT_COND_ADD((callback->kmm_flags & KMM_NOTIFY),
		    kmem_move_stats.kms_notify_no_longer_reclaimable);
		kmem_slab_free(cp, callback->kmm_to_buf);
		kmem_move_end(cp, callback);
		return;
	}

	/*
	 * Hunting magazines is expensive, so we'll wait to do that until the
	 * client responds KMEM_CBRC_DONT_KNOW.  However, checking the slab
	 * layer is cheap, so we might as well do that here in case we can
	 * avoid bothering the client.
	 */
	mutex_enter(&cp->cache_lock);
	free_on_slab = (kmem_slab_allocated(cp, sp,
	    callback->kmm_from_buf) == NULL);
	mutex_exit(&cp->cache_lock);

	if (free_on_slab) {
		KMEM_STAT_ADD(kmem_move_stats.kms_hunt_found_slab);
		kmem_slab_free(cp, callback->kmm_to_buf);
		kmem_move_end(cp, callback);
		return;
	}

	if (cp->cache_flags & KMF_BUFTAG) {
		/*
		 * Make kmem_cache_alloc_debug() apply the constructor for us.
		 */
		if (kmem_cache_alloc_debug(cp, callback->kmm_to_buf,
		    KM_NOSLEEP, 1, caller()) != 0) {
			KMEM_STAT_ADD(kmem_move_stats.kms_alloc_fail);
			kmem_move_end(cp, callback);
			return;
		}
	} else if (cp->cache_constructor != NULL &&
	    cp->cache_constructor(callback->kmm_to_buf, cp->cache_private,
	    KM_NOSLEEP) != 0) {
		atomic_add_64(&cp->cache_alloc_fail, 1);
		KMEM_STAT_ADD(kmem_move_stats.kms_constructor_fail);
		kmem_slab_free(cp, callback->kmm_to_buf);
		kmem_move_end(cp, callback);
		return;
	}

	KMEM_STAT_ADD(kmem_move_stats.kms_callbacks);
	KMEM_STAT_COND_ADD((callback->kmm_flags & KMM_NOTIFY),
	    kmem_move_stats.kms_notify_callbacks);
	cp->cache_defrag->kmd_callbacks++;
	cp->cache_defrag->kmd_thread = curthread;
	cp->cache_defrag->kmd_from_buf = callback->kmm_from_buf;
	cp->cache_defrag->kmd_to_buf = callback->kmm_to_buf;
	DTRACE_PROBE2(kmem__move__start, kmem_cache_t *, cp, kmem_move_t *,
	    callback);

	response = cp->cache_move(callback->kmm_from_buf,
	    callback->kmm_to_buf, cp->cache_bufsize, cp->cache_private);

	DTRACE_PROBE3(kmem__move__end, kmem_cache_t *, cp, kmem_move_t *,
	    callback, kmem_cbrc_t, response);
	cp->cache_defrag->kmd_thread = NULL;
	cp->cache_defrag->kmd_from_buf = NULL;
	cp->cache_defrag->kmd_to_buf = NULL;

	if (response == KMEM_CBRC_YES) {
		KMEM_STAT_ADD(kmem_move_stats.kms_yes);
		cp->cache_defrag->kmd_yes++;
		kmem_slab_free_constructed(cp, callback->kmm_from_buf, B_FALSE);
		/* slab safe to access until kmem_move_end() */
		if (sp->slab_refcnt == 0)
			cp->cache_defrag->kmd_slabs_freed++;
		mutex_enter(&cp->cache_lock);
		kmem_slab_move_yes(cp, sp, callback->kmm_from_buf);
		mutex_exit(&cp->cache_lock);
		kmem_move_end(cp, callback);
		return;
	}

	switch (response) {
	case KMEM_CBRC_NO:
		KMEM_STAT_ADD(kmem_move_stats.kms_no);
		cp->cache_defrag->kmd_no++;
		mutex_enter(&cp->cache_lock);
		kmem_slab_move_no(cp, sp, callback->kmm_from_buf);
		mutex_exit(&cp->cache_lock);
		break;
	case KMEM_CBRC_LATER:
		KMEM_STAT_ADD(kmem_move_stats.kms_later);
		cp->cache_defrag->kmd_later++;
		mutex_enter(&cp->cache_lock);
		if (!KMEM_SLAB_IS_PARTIAL(sp)) {
			mutex_exit(&cp->cache_lock);
			break;
		}

		if (++sp->slab_later_count >= KMEM_DISBELIEF) {
			KMEM_STAT_ADD(kmem_move_stats.kms_disbelief);
			kmem_slab_move_no(cp, sp, callback->kmm_from_buf);
		} else if (!(sp->slab_flags & KMEM_SLAB_NOMOVE)) {
			sp->slab_stuck_offset = KMEM_SLAB_OFFSET(sp,
			    callback->kmm_from_buf);
		}
		mutex_exit(&cp->cache_lock);
		break;
	case KMEM_CBRC_DONT_NEED:
		KMEM_STAT_ADD(kmem_move_stats.kms_dont_need);
		cp->cache_defrag->kmd_dont_need++;
		kmem_slab_free_constructed(cp, callback->kmm_from_buf, B_FALSE);
		if (sp->slab_refcnt == 0)
			cp->cache_defrag->kmd_slabs_freed++;
		mutex_enter(&cp->cache_lock);
		kmem_slab_move_yes(cp, sp, callback->kmm_from_buf);
		mutex_exit(&cp->cache_lock);
		break;
	case KMEM_CBRC_DONT_KNOW:
		KMEM_STAT_ADD(kmem_move_stats.kms_dont_know);
		cp->cache_defrag->kmd_dont_know++;
		if (kmem_hunt_mags(cp, callback->kmm_from_buf) != NULL) {
			KMEM_STAT_ADD(kmem_move_stats.kms_hunt_found_mag);
			cp->cache_defrag->kmd_hunt_found++;
			kmem_slab_free_constructed(cp, callback->kmm_from_buf,
			    B_TRUE);
			if (sp->slab_refcnt == 0)
				cp->cache_defrag->kmd_slabs_freed++;
			mutex_enter(&cp->cache_lock);
			kmem_slab_move_yes(cp, sp, callback->kmm_from_buf);
			mutex_exit(&cp->cache_lock);
		}
		break;
	default:
		panic("'%s' (%p) unexpected move callback response %d\n",
		    cp->cache_name, (void *)cp, response);
	}

	kmem_slab_free_constructed(cp, callback->kmm_to_buf, B_FALSE);
	kmem_move_end(cp, callback);
}
/* Return B_FALSE if there is insufficient memory for the move request. */
static boolean_t
kmem_move_begin(kmem_cache_t *cp, kmem_slab_t *sp, void *buf, int flags)
{
	void *to_buf;
	avl_index_t index;
	kmem_move_t *callback, *pending;
	ulong_t n;

	ASSERT(taskq_member(kmem_taskq, curthread));
	ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
	ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING);

	callback = kmem_cache_alloc(kmem_move_cache, KM_NOSLEEP);
	if (callback == NULL) {
		KMEM_STAT_ADD(kmem_move_stats.kms_callback_alloc_fail);
		return (B_FALSE);
	}

	callback->kmm_from_slab = sp;
	callback->kmm_from_buf = buf;
	callback->kmm_flags = flags;

	mutex_enter(&cp->cache_lock);

	n = avl_numnodes(&cp->cache_partial_slabs);
	if ((n == 0) || ((n == 1) && !(flags & KMM_DEBUG))) {
		mutex_exit(&cp->cache_lock);
		kmem_cache_free(kmem_move_cache, callback);
		return (B_TRUE); /* there is no need for the move request */
	}

	pending = avl_find(&cp->cache_defrag->kmd_moves_pending, buf, &index);
	if (pending != NULL) {
		/*
		 * If the move is already pending and we're desperate now,
		 * update the move flags.
		 */
		if (flags & KMM_DESPERATE) {
			pending->kmm_flags |= KMM_DESPERATE;
		}
		mutex_exit(&cp->cache_lock);
		KMEM_STAT_ADD(kmem_move_stats.kms_already_pending);
		kmem_cache_free(kmem_move_cache, callback);
		return (B_TRUE);
	}

	to_buf = kmem_slab_alloc_impl(cp, avl_first(&cp->cache_partial_slabs));
	callback->kmm_to_buf = to_buf;
	avl_insert(&cp->cache_defrag->kmd_moves_pending, callback, index);

	mutex_exit(&cp->cache_lock);

	if (!taskq_dispatch(kmem_move_taskq, (task_func_t *)kmem_move_buffer,
	    callback, TQ_NOSLEEP)) {
		KMEM_STAT_ADD(kmem_move_stats.kms_callback_taskq_fail);
		mutex_enter(&cp->cache_lock);
		avl_remove(&cp->cache_defrag->kmd_moves_pending, callback);
		mutex_exit(&cp->cache_lock);
		kmem_slab_free(cp, to_buf);
		kmem_cache_free(kmem_move_cache, callback);
		return (B_FALSE);
	}

	return (B_TRUE);
}
static void
kmem_move_end(kmem_cache_t *cp, kmem_move_t *callback)
{
	avl_index_t index;

	ASSERT(cp->cache_defrag != NULL);
	ASSERT(taskq_member(kmem_move_taskq, curthread));
	ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));

	mutex_enter(&cp->cache_lock);
	VERIFY(avl_find(&cp->cache_defrag->kmd_moves_pending,
	    callback->kmm_from_buf, &index) != NULL);
	avl_remove(&cp->cache_defrag->kmd_moves_pending, callback);
	if (avl_is_empty(&cp->cache_defrag->kmd_moves_pending)) {
		list_t *deadlist = &cp->cache_defrag->kmd_deadlist;
		kmem_slab_t *sp;

		/*
		 * The last pending move completed.  Release all slabs from
		 * the front of the dead list except for any slab at the tail
		 * that needs to be released from the context of
		 * kmem_move_buffers().  kmem deferred unmapping the buffers
		 * on these slabs in order to guarantee that buffers passed
		 * to the move callback have been touched only by kmem or by
		 * the client itself.
		 */
		while ((sp = list_remove_head(deadlist)) != NULL) {
			if (sp->slab_flags & KMEM_SLAB_MOVE_PENDING) {
				list_insert_tail(deadlist, sp);
				break;
			}
			cp->cache_defrag->kmd_deadcount--;
			cp->cache_slab_destroy++;
			mutex_exit(&cp->cache_lock);
			kmem_slab_destroy(cp, sp);
			KMEM_STAT_ADD(kmem_move_stats.kms_dead_slabs_freed);
			mutex_enter(&cp->cache_lock);
		}
	}
	mutex_exit(&cp->cache_lock);
	kmem_cache_free(kmem_move_cache, callback);
}
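/*
 * Lifecycle sketch of a dead slab (slab_refcnt == 0) while moves are pending,
 * summarizing the rules above and in kmem_move_buffers() below; this is a
 * restatement for orientation, not a new mechanism:
 *
 *	buffer freed, slab_refcnt -> 0
 *	    -> slab is kept on kmd_deadlist rather than destroyed
 *	last pending move completes (kmem_move_end())
 *	    -> deadlist drained from the front, stopping at any slab flagged
 *	       KMEM_SLAB_MOVE_PENDING, which kmem_move_buffers() destroys
 *	       itself after reacquiring cache_lock
 */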
/*
 * Move buffers from least used slabs first by scanning backwards from the end
 * of the partial slab list.  Scan at most max_scan candidate slabs and move
 * buffers from at most max_slabs slabs (0 for all partial slabs in both
 * cases).  If desperate to reclaim memory, move buffers from any partial
 * slab, otherwise skip slabs with a ratio of allocated buffers at or above
 * the current threshold.  Return the number of unskipped slabs (at most
 * max_slabs, -1 if the scan is aborted) so that the caller can adjust the
 * reclaimability threshold depending on how many reclaimable slabs it finds.
 *
 * kmem_move_buffers() drops and reacquires cache_lock every time it issues a
 * move request, since it is not valid for kmem_move_begin() to call
 * kmem_cache_alloc() or taskq_dispatch() with cache_lock held.
 */
static int
kmem_move_buffers(kmem_cache_t *cp, size_t max_scan, size_t max_slabs,
    int flags)
{
	kmem_slab_t *sp;
	void *buf;
	int i, j;	/* slab index, buffer index */
	int s;		/* reclaimable slabs */
	int b;		/* allocated (movable) buffers on reclaimable slab */
	boolean_t success;
	int refcnt;
	int nomove;

	ASSERT(taskq_member(kmem_taskq, curthread));
	ASSERT(MUTEX_HELD(&cp->cache_lock));
	ASSERT(kmem_move_cache != NULL);
	ASSERT(cp->cache_move != NULL && cp->cache_defrag != NULL);
	ASSERT((flags & KMM_DEBUG) ? !avl_is_empty(&cp->cache_partial_slabs) :
	    avl_numnodes(&cp->cache_partial_slabs) > 1);

	if (kmem_move_blocked) {
		return (0);
	}

	if (kmem_move_fulltilt) {
		flags |= KMM_DESPERATE;
	}

	if (max_scan == 0 || (flags & KMM_DESPERATE)) {
		/*
		 * Scan as many slabs as needed to find the desired number of
		 * candidate slabs.
		 */
		max_scan = (size_t)-1;
	}

	if (max_slabs == 0 || (flags & KMM_DESPERATE)) {
		/* Find as many candidate slabs as possible. */
		max_slabs = (size_t)-1;
	}

	sp = avl_last(&cp->cache_partial_slabs);
	ASSERT(KMEM_SLAB_IS_PARTIAL(sp));
	for (i = 0, s = 0; (i < max_scan) && (s < max_slabs) && (sp != NULL) &&
	    ((sp != avl_first(&cp->cache_partial_slabs)) ||
	    (flags & KMM_DEBUG));
	    sp = AVL_PREV(&cp->cache_partial_slabs, sp), i++) {

		if (!kmem_slab_is_reclaimable(cp, sp, flags)) {
			continue;
		}
		s++;

		/* Look for allocated buffers to move. */
		for (j = 0, b = 0, buf = sp->slab_base;
		    (j < sp->slab_chunks) && (b < sp->slab_refcnt);
		    buf = (((char *)buf) + cp->cache_chunksize), j++) {

			if (kmem_slab_allocated(cp, sp, buf) == NULL) {
				continue;
			}

			b++;

			/*
			 * Prevent the slab from being destroyed while we drop
			 * cache_lock and while the pending move is not yet
			 * registered.  Flag the pending move while
			 * kmd_moves_pending may still be empty, since we can't
			 * yet rely on a non-zero pending move count to prevent
			 * the slab from being destroyed.
			 */
			ASSERT(!(sp->slab_flags & KMEM_SLAB_MOVE_PENDING));
			sp->slab_flags |= KMEM_SLAB_MOVE_PENDING;
			/*
			 * Recheck refcnt and nomove after reacquiring the
			 * lock, since these control the order of partial
			 * slabs, and we want to know if we can pick up the
			 * scan where we left off.
			 */
			refcnt = sp->slab_refcnt;
			nomove = (sp->slab_flags & KMEM_SLAB_NOMOVE);
			mutex_exit(&cp->cache_lock);

			success = kmem_move_begin(cp, sp, buf, flags);

			/*
			 * Now, before the lock is reacquired, kmem could
			 * process all pending move requests and purge the
			 * deadlist, so that upon reacquiring the lock, sp may
			 * already have been freed.  Or, the client may free
			 * all the objects on the slab while the pending moves
			 * are still on the taskq.  Therefore, the
			 * KMEM_SLAB_MOVE_PENDING flag causes the slab to be
			 * put at the end of the deadlist and prevents it from
			 * being destroyed, since we plan to destroy it here
			 * after reacquiring the lock.
			 */
			mutex_enter(&cp->cache_lock);
			ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING);
			sp->slab_flags &= ~KMEM_SLAB_MOVE_PENDING;

			if (sp->slab_refcnt == 0) {
				list_t *deadlist =
				    &cp->cache_defrag->kmd_deadlist;
				list_remove(deadlist, sp);

				if (!avl_is_empty(
				    &cp->cache_defrag->kmd_moves_pending)) {
					/*
					 * A pending move makes it unsafe to
					 * destroy the slab, because even
					 * though the move is no longer
					 * needed, the context where that is
					 * determined requires the slab to
					 * exist.  Fortunately, a pending move
					 * also means we don't need to destroy
					 * the slab here, since it will get
					 * destroyed along with any other
					 * slabs on the deadlist after the
					 * last pending move completes.
					 */
					list_insert_head(deadlist, sp);
					KMEM_STAT_ADD(kmem_move_stats.
					    kms_endscan_slab_dead);
					return (-1);
				}

				/*
				 * Destroy the slab now if it was completely
				 * freed while we dropped cache_lock and there
				 * are no pending moves.  Since slab_refcnt
				 * cannot change once it reaches zero, no new
				 * pending moves from that slab are possible.
				 */
				cp->cache_defrag->kmd_deadcount--;
				cp->cache_slab_destroy++;
				mutex_exit(&cp->cache_lock);
				kmem_slab_destroy(cp, sp);
				KMEM_STAT_ADD(kmem_move_stats.
				    kms_dead_slabs_freed);
				KMEM_STAT_ADD(kmem_move_stats.
				    kms_endscan_slab_destroyed);
				mutex_enter(&cp->cache_lock);
				/*
				 * Since we can't pick up the scan where we
				 * left off, abort the scan and say nothing
				 * about the number of reclaimable slabs.
				 */
				return (-1);
			}

			if (!success) {
				/*
				 * Abort the scan if there is not enough memory
				 * for the request and say nothing about the
				 * number of reclaimable slabs.
				 */
				KMEM_STAT_COND_ADD(s < max_slabs,
				    kmem_move_stats.kms_endscan_nomem);
				return (-1);
			}

			/*
			 * The slab's position changed while the lock was
			 * dropped, so we don't know where we are in the
			 * sequence any more.
			 */
			if (sp->slab_refcnt != refcnt) {
				/*
				 * If this is a KMM_DEBUG move, the slab_refcnt
				 * may have changed because we allocated a
				 * destination buffer on the same slab.  In
				 * that case, we're not interested in counting
				 * it.
				 */
				KMEM_STAT_COND_ADD(!(flags & KMM_DEBUG) &&
				    (s < max_slabs),
				    kmem_move_stats.kms_endscan_refcnt_changed);
				return (-1);
			}
			if ((sp->slab_flags & KMEM_SLAB_NOMOVE) != nomove) {
				KMEM_STAT_COND_ADD(s < max_slabs,
				    kmem_move_stats.kms_endscan_nomove_changed);
				return (-1);
			}

			/*
			 * Generating a move request allocates a destination
			 * buffer from the slab layer, bumping the first
			 * partial slab if it is completely allocated.  If the
			 * current slab becomes the first partial slab as a
			 * result, we can't continue to scan backwards.
			 *
			 * If this is a KMM_DEBUG move and we allocated the
			 * destination buffer from the last partial slab, then
			 * the buffer we're moving is on the same slab and our
			 * slab_refcnt has changed, causing us to return before
			 * reaching here if there are no partial slabs left.
			 */
			ASSERT(!avl_is_empty(&cp->cache_partial_slabs));
			if (sp == avl_first(&cp->cache_partial_slabs)) {
				/*
				 * We're not interested in a second KMM_DEBUG
				 * move.
				 */
				goto end_scan;
			}
		}
	}
end_scan:

	KMEM_STAT_COND_ADD(!(flags & KMM_DEBUG) &&
	    (s < max_slabs) &&
	    (sp == avl_first(&cp->cache_partial_slabs)),
	    kmem_move_stats.kms_endscan_freelist);

	return (s);
}
typedef struct kmem_move_notify_args {
	kmem_cache_t *kmna_cache;
	void *kmna_buf;
} kmem_move_notify_args_t;

static void
kmem_cache_move_notify_task(void *arg)
{
	kmem_move_notify_args_t *args = arg;
	kmem_cache_t *cp = args->kmna_cache;
	void *buf = args->kmna_buf;
	kmem_slab_t *sp;

	ASSERT(taskq_member(kmem_taskq, curthread));
	ASSERT(list_link_active(&cp->cache_link));

	kmem_free(args, sizeof (kmem_move_notify_args_t));
	mutex_enter(&cp->cache_lock);
	sp = kmem_slab_allocated(cp, NULL, buf);

	/* Ignore the notification if the buffer is no longer allocated. */
	if (sp == NULL) {
		mutex_exit(&cp->cache_lock);
		return;
	}

	/* Ignore the notification if there's no reason to move the buffer. */
	if (avl_numnodes(&cp->cache_partial_slabs) > 1) {
		/*
		 * So far the notification is not ignored.  Ignore the
		 * notification if the slab is not marked by an earlier refusal
		 * to move a buffer.
		 */
		if (!(sp->slab_flags & KMEM_SLAB_NOMOVE) &&
		    (sp->slab_later_count == 0)) {
			mutex_exit(&cp->cache_lock);
			return;
		}

		kmem_slab_move_yes(cp, sp, buf);
		ASSERT(!(sp->slab_flags & KMEM_SLAB_MOVE_PENDING));
		sp->slab_flags |= KMEM_SLAB_MOVE_PENDING;
		mutex_exit(&cp->cache_lock);
		/* see kmem_move_buffers() about dropping the lock */
		(void) kmem_move_begin(cp, sp, buf, KMM_NOTIFY);
		mutex_enter(&cp->cache_lock);
		ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING);
		sp->slab_flags &= ~KMEM_SLAB_MOVE_PENDING;
		if (sp->slab_refcnt == 0) {
			list_t *deadlist = &cp->cache_defrag->kmd_deadlist;
			list_remove(deadlist, sp);

			if (!avl_is_empty(
			    &cp->cache_defrag->kmd_moves_pending)) {
				list_insert_head(deadlist, sp);
				mutex_exit(&cp->cache_lock);
				KMEM_STAT_ADD(kmem_move_stats.
				    kms_notify_slab_dead);
				return;
			}

			cp->cache_defrag->kmd_deadcount--;
			cp->cache_slab_destroy++;
			mutex_exit(&cp->cache_lock);
			kmem_slab_destroy(cp, sp);
			KMEM_STAT_ADD(kmem_move_stats.kms_dead_slabs_freed);
			KMEM_STAT_ADD(kmem_move_stats.
			    kms_notify_slab_destroyed);
			return;
		}
	} else {
		kmem_slab_move_yes(cp, sp, buf);
	}
	mutex_exit(&cp->cache_lock);
}

void
kmem_cache_move_notify(kmem_cache_t *cp, void *buf)
{
	kmem_move_notify_args_t *args;

	KMEM_STAT_ADD(kmem_move_stats.kms_notify);
	args = kmem_alloc(sizeof (kmem_move_notify_args_t), KM_NOSLEEP);
	if (args != NULL) {
		args->kmna_cache = cp;
		args->kmna_buf = buf;
		if (!taskq_dispatch(kmem_taskq,
		    (task_func_t *)kmem_cache_move_notify_task, args,
		    TQ_NOSLEEP))
			kmem_free(args, sizeof (kmem_move_notify_args_t));
	}
}

static void
kmem_cache_defrag(kmem_cache_t *cp)
{
	size_t n;

	ASSERT(cp->cache_defrag != NULL);

	mutex_enter(&cp->cache_lock);
	n = avl_numnodes(&cp->cache_partial_slabs);
	if (n > 1) {
		/* kmem_move_buffers() drops and reacquires cache_lock */
		KMEM_STAT_ADD(kmem_move_stats.kms_defrags);
		cp->cache_defrag->kmd_defrags++;
		(void) kmem_move_buffers(cp, n, 0, KMM_DESPERATE);
	}
	mutex_exit(&cp->cache_lock);
}
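/*
 * Illustrative sketch: a client that answered KMEM_CBRC_LATER (or NO) for an
 * object can call kmem_cache_move_notify() once the object becomes movable,
 * prompting kmem to retry; the notification is ignored unless the slab was
 * marked by an earlier refusal, as shown above.  foo_cache and the foo_t
 * field below are hypothetical.
 *
 *	void
 *	foo_release(foo_t *fp)
 *	{
 *		fp->foo_busy = B_FALSE;
 *		kmem_cache_move_notify(foo_cache, fp);
 *	}
 */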
/* Is this cache above the fragmentation threshold? */
static boolean_t
kmem_cache_frag_threshold(kmem_cache_t *cp, uint64_t nfree)
{
	/*
	 *	nfree			kmem_frag_numer
	 * ------------------	  >	---------------
	 * cp->cache_buftotal		kmem_frag_denom
	 */
	return ((nfree * kmem_frag_denom) >
	    (cp->cache_buftotal * kmem_frag_numer));
}
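/*
 * A worked instance of the threshold above, using illustrative (not
 * necessarily default) values kmem_frag_numer = 1 and kmem_frag_denom = 8,
 * i.e. the cache counts as fragmented when more than 1/8 of its buffers are
 * free in the slab layer.  For a cache with cache_buftotal = 1000:
 *
 *	nfree = 126:  126 * 8 = 1008  >  1000 * 1	=> fragmented
 *	nfree = 125:  125 * 8 = 1000, not > 1000	=> not fragmented
 */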
static boolean_t
kmem_cache_is_fragmented(kmem_cache_t *cp, boolean_t *doreap)
{
	boolean_t fragmented;
	uint64_t nfree;

	ASSERT(MUTEX_HELD(&cp->cache_lock));
	*doreap = B_FALSE;

	if (kmem_move_fulltilt) {
		if (avl_numnodes(&cp->cache_partial_slabs) > 1) {
			return (B_TRUE);
		}
	} else {
		if ((cp->cache_complete_slab_count + avl_numnodes(
		    &cp->cache_partial_slabs)) < kmem_frag_minslabs) {
			return (B_FALSE);
		}
	}

	nfree = cp->cache_bufslab;
	fragmented = ((avl_numnodes(&cp->cache_partial_slabs) > 1) &&
	    kmem_cache_frag_threshold(cp, nfree));

	/*
	 * Free buffers in the magazine layer appear allocated from the point
	 * of view of the slab layer.  We want to know if the slab layer would
	 * appear fragmented if we included free buffers from magazines that
	 * have fallen out of the working set.
	 */
	if (!fragmented) {
		long reap;

		mutex_enter(&cp->cache_depot_lock);
		reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min);
		reap = MIN(reap, cp->cache_full.ml_total);
		mutex_exit(&cp->cache_depot_lock);

		nfree += ((uint64_t)reap * cp->cache_magtype->mt_magsize);
		if (kmem_cache_frag_threshold(cp, nfree)) {
			*doreap = B_TRUE;
		}
	}

	return (fragmented);
}

/* Called periodically from kmem_taskq */
static void
kmem_cache_scan(kmem_cache_t *cp)
{
	boolean_t reap = B_FALSE;
	kmem_defrag_t *kmd;

	ASSERT(taskq_member(kmem_taskq, curthread));

	mutex_enter(&cp->cache_lock);

	kmd = cp->cache_defrag;
	if (kmd->kmd_consolidate > 0) {
		kmd->kmd_consolidate--;
		mutex_exit(&cp->cache_lock);
		kmem_cache_reap(cp);
		return;
	}

	if (kmem_cache_is_fragmented(cp, &reap)) {
		int slabs_found;

		/*
		 * Consolidate reclaimable slabs from the end of the partial
		 * slab list (scan at most kmem_reclaim_scan_range slabs to
		 * find reclaimable slabs).  Keep track of how many candidate
		 * slabs we looked for and how many we actually found so we
		 * can adjust the definition of a candidate slab if we're
		 * having trouble finding them.
		 *
		 * kmem_move_buffers() drops and reacquires cache_lock.
		 */
		KMEM_STAT_ADD(kmem_move_stats.kms_scans);
		kmd->kmd_scans++;
		/*
		 * slabs_found must be signed: kmem_move_buffers() returns -1
		 * when it aborts the scan, and an unsigned type here would
		 * make the check below vacuously true.
		 */
		slabs_found = kmem_move_buffers(cp, kmem_reclaim_scan_range,
		    kmem_reclaim_max_slabs, 0);
		if (slabs_found >= 0) {
			kmd->kmd_slabs_sought += kmem_reclaim_max_slabs;
			kmd->kmd_slabs_found += slabs_found;
		}

		if (++kmd->kmd_tries >= kmem_reclaim_scan_range) {
			kmd->kmd_tries = 0;

			/*
			 * If we had difficulty finding candidate slabs in
			 * previous scans, adjust the threshold so that
			 * candidates are easier to find.
			 */
			if (kmd->kmd_slabs_found == kmd->kmd_slabs_sought) {
				kmem_adjust_reclaim_threshold(kmd, -1);
			} else if ((kmd->kmd_slabs_found * 2) <
			    kmd->kmd_slabs_sought) {
				kmem_adjust_reclaim_threshold(kmd, 1);
			}
			kmd->kmd_slabs_sought = 0;
			kmd->kmd_slabs_found = 0;
		}
	} else {
		kmem_reset_reclaim_threshold(cp->cache_defrag);
#ifdef DEBUG
		if (!avl_is_empty(&cp->cache_partial_slabs)) {
			/*
			 * In a debug kernel we want the consolidator to
			 * run occasionally even when there is plenty of
			 * memory.
			 */
			uint16_t debug_rand;

			(void) random_get_bytes((uint8_t *)&debug_rand, 2);
			if (!kmem_move_noreap &&
			    ((debug_rand % kmem_mtb_reap) == 0)) {
				mutex_exit(&cp->cache_lock);
				KMEM_STAT_ADD(kmem_move_stats.kms_debug_reaps);
				kmem_cache_reap(cp);
				return;
			} else if ((debug_rand % kmem_mtb_move) == 0) {
				KMEM_STAT_ADD(kmem_move_stats.kms_scans);
				KMEM_STAT_ADD(kmem_move_stats.kms_debug_scans);
				kmd->kmd_scans++;
				(void) kmem_move_buffers(cp,
				    kmem_reclaim_scan_range, 1, KMM_DEBUG);
			}
		}
#endif	/* DEBUG */
	}

	mutex_exit(&cp->cache_lock);

	if (reap) {
		KMEM_STAT_ADD(kmem_move_stats.kms_scan_depot_ws_reaps);
		kmem_depot_ws_reap(cp);
	}
}
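/*
 * A worked example of the feedback loop above (illustrative numbers only).
 * Suppose that after kmem_reclaim_scan_range tries the totals are:
 *
 *	kmd_slabs_sought = 16, kmd_slabs_found = 16
 *		every scan found as many candidates as it sought, so
 *		kmem_adjust_reclaim_threshold() is called with -1, tightening
 *		the kmd_reclaim_numer threshold (within its bounds) so that
 *		slabs must be emptier to qualify.
 *
 *	kmd_slabs_sought = 16, kmd_slabs_found = 7
 *		7 * 2 = 14 < 16: the threshold is adjusted by +1, loosening
 *		the definition of a candidate so reclaimable slabs are easier
 *		to find on subsequent scans.
 */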