/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright 2020 Joyent, Inc.
 * Copyright 2015 Garrett D'Amore <garrett@damore.org>
 */

/*
 * MAC Services Module
 *
 * The GLDv3 framework locking - The MAC layer
 * --------------------------------------------
 *
 * The MAC layer is central to the GLD framework and can provide the locking
 * framework needed for itself and for the use of MAC clients. MAC end points
 * are fairly disjoint and don't share a lot of state. So a coarse grained
 * multi-threading scheme is to single thread all create/modify/delete or set
 * type of control operations on a per mac end point while allowing data
 * threads to run concurrently.
 *
 * Control operations (set) that modify a mac end point are always serialized
 * on a per mac end point basis. We have at most one such thread per mac end
 * point at a time.
 *
 * All other operations that are not serialized are essentially multi-threaded.
 * For example a control operation (get) like getting statistics which may not
 * care about reading values atomically, or data threads sending or receiving
 * data. Mostly these types of operations don't modify the control state. Any
 * state these operations care about is protected using traditional locks.
 *
 * The perimeter only serializes serial operations. It does not imply there
 * aren't any other concurrent operations. However a serialized operation may
 * sometimes need to make sure it is the only thread. In this case it needs
 * to use reference counting mechanisms to cv_wait until any current data
 * threads are done.
 *
 * The mac layer itself does not hold any locks across a call to another layer.
 * The perimeter is however held across a down call to the driver to make the
 * whole control operation atomic with respect to other control operations.
 * Also the data path and get type control operations may proceed concurrently.
 * These operations synchronize with the single serial operation on a given mac
 * end point using regular locks. The perimeter ensures that conflicting
 * operations like say a mac_multicast_add and a mac_multicast_remove on the
 * same mac end point don't interfere with each other and also ensures that the
 * changes in the mac layer and the call to the underlying driver to say add a
 * multicast address are done atomically without interference from a thread
 * trying to delete the same address.
 *
 * For example, consider
 * mac_multicast_add()
 * {
 *	mac_perimeter_enter();		serialize all control operations
 *
 *	grab list lock			protect against access by data threads
 *	add to list
 *	drop list lock
 *
 *	call driver's mi_multicst
 *
 *	mac_perimeter_exit();
 * }
 *
 * To lessen the number of serialization locks and simplify the lock hierarchy,
 * we serialize all the control operations on a per mac end point by using a
 * single serialization lock called the perimeter. We allow recursive entry
 * into the perimeter to facilitate use of this mechanism by both the mac
 * client and the MAC layer itself.
 *
 * MAC client means an entity that does an operation on a mac handle
 * obtained from a mac_open/mac_client_open. Similarly MAC driver means
 * an entity that does an operation on a mac handle obtained from a
 * mac_register. An entity could be both client and driver but on different
 * handles (e.g. aggr), and should only make the corresponding mac interface
 * calls, i.e. mac driver interface or mac client interface, as appropriate
 * for that mac handle.
 *
 * General rules.
 * -------------
 *
 * R1. The lock order of upcall threads is naturally opposite to downcall
 * threads. Hence upcalls must not hold any locks across layers for fear of
 * recursive lock enter and lock order violation. This applies to all layers.
 *
 * R2. The perimeter is just another lock. Since it is held in the down
 * direction, acquiring the perimeter in an upcall is prohibited as it would
 * cause a deadlock. This applies to all layers.
 *
 * Note that upcalls that need to grab the mac perimeter (for example
 * mac_notify upcalls) can still achieve that by posting the request to a
 * thread, which can then grab all the required perimeters and locks in the
 * right global order. Note that in the above example the mac layer itself
 * won't grab the mac perimeter in the mac_notify upcall, instead the upcall
 * to the client must do that. Please see the aggr code for an example.
 *
 * MAC client rules
 * ----------------
 *
 * R3. A MAC client may use the MAC provided perimeter facility to serialize
 * control operations on a per mac end point. It does this by acquiring
 * and holding the perimeter across a sequence of calls to the mac layer.
 * This ensures atomicity across the entire block of mac calls. In this
 * model the MAC client must not hold any client locks across the calls to
 * the mac layer. This model is the preferred solution; see the sketch below
 * for an illustration.
 *
 * R4. However if a MAC client has a lot of global state across all mac end
 * points the per mac end point serialization may not be sufficient. In this
 * case the client may choose to use global locks or use its own serialization.
 * To avoid deadlocks, these client layer locks held across the mac calls
 * in the control path must never be acquired by the data path for the reason
 * mentioned below.
 *
 * (Assume that a control operation that holds a client lock blocks in the
 * mac layer waiting for upcall reference counts to drop to zero. If an upcall
 * data thread that holds this reference count tries to acquire the same
 * client lock subsequently, it will deadlock).
 *
 * A MAC client may follow either the R3 model or the R4 model, but can't
 * mix both. In the former, the hierarchy is Perim -> client locks, but in
 * the latter it is client locks -> Perim.
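 *
 * As an illustration of the R3 model, a client might bracket a sequence of
 * control operations as in the following hypothetical sketch (the specific
 * calls and their arguments are illustrative only, and error handling is
 * elided):
 *
 *	mac_perim_enter_by_mh(mh, &mph);	serialize with other control ops
 *
 *	(void) mac_unicast_add(...);		mac calls are atomic as a block
 *	(void) mac_multicast_add(...);
 *
 *	mac_perim_exit(mph);			no client locks held throughout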
 *
 * R5. MAC clients must make MAC calls (excluding data calls) in a cv_wait'able
 * context since they may block while trying to acquire the perimeter.
 * In addition some calls may block waiting for upcall refcnts to come down to
 * zero.
 *
 * R6. MAC clients must make sure that they are single threaded and all threads
 * from the top (in particular data threads) have finished before calling
 * mac_client_close. The MAC framework does not track the number of client
 * threads using the mac client handle. Also mac clients must make sure
 * they have undone all the control operations before calling mac_client_close.
 * For example mac_unicast_remove/mac_multicast_remove to undo the corresponding
 * mac_unicast_add/mac_multicast_add.
 *
 * MAC framework rules
 * -------------------
 *
 * R7. The mac layer itself must not hold any mac layer locks (except the mac
 * perimeter) across a call to any other layer from the mac layer. The call to
 * any other layer could be via mi_* entry points, classifier entry points into
 * the driver or via upcall pointers into layers above. The mac perimeter may
 * be acquired or held only in the down direction, e.g. when calling into
 * a mi_* driver entry point to provide atomicity of the operation.
 *
 * R8. Since it is not guaranteed (see R14) that drivers won't hold locks across
 * mac driver interfaces, the MAC layer must provide a cut out for control
 * interfaces like upcall notifications and start them in a separate thread.
 *
 * R9. Note that locking order also implies a plumbing order. For example
 * VNICs are allowed to be created over aggrs, but not vice-versa. An attempt
 * to plumb in any other order must be failed at mac_open time, otherwise it
 * could lead to deadlocks due to inverse locking order.
 *
 * R10. MAC driver interfaces must not block since the driver could call them
 * in interrupt context.
 *
 * R11. Walkers must preferably not hold any locks while calling walker
 * callbacks. Instead these can operate on reference counts. In simple
 * callbacks it may be ok to hold a lock and call the callbacks, but this is
 * harder to maintain in the general case of arbitrary callbacks.
 *
 * R12. The MAC layer must protect upcall notification callbacks using reference
 * counts rather than holding locks across the callbacks.
 *
 * R13. Given the variety of drivers, it is preferable if the MAC layer can make
 * sure that any pointers (such as mac ring pointers) it passes to the driver
 * remain valid until mac unregister time. Currently the mac layer achieves
 * this by using generation numbers for rings and freeing the mac rings only
 * at unregister time. The MAC layer must provide a layer of indirection and
 * must not expose underlying driver rings or driver data structures/pointers
 * directly to MAC clients.
 *
 * MAC driver rules
 * ----------------
 *
 * R14. It would be preferable if MAC drivers don't hold any locks across any
 * mac call. However at a minimum they must not hold any locks across data
 * upcalls. They must also make sure that all references to mac data structures
 * are cleaned up and that it is single threaded at mac_unregister time.
 *
 * R15. MAC driver interfaces don't block and so the action may be done
 * asynchronously in a separate thread, as for example handling notifications.
 * The driver must not assume that the action is complete when the call
 * returns.
 *
 * R16. Drivers must maintain a generation number per Rx ring, and pass it
 * back to mac_rx_ring(). They are expected to increment the generation
 * number whenever the ring's stop routine is invoked.
 * See comments in mac_rx_ring().
 *
 * R17. Similarly mi_stop is another synchronization point and the driver must
 * ensure that all upcalls are done and there won't be any future upcalls
 * before returning from mi_stop.
 *
 * R18. The driver may assume that all set/modify control operations via
 * the mi_* entry points are single threaded on a per mac end point.
 *
 * Lock and Perimeter hierarchy scenarios
 * ---------------------------------------
 *
 * i_mac_impl_lock -> mi_rw_lock -> srs_lock -> s_ring_lock[i_mac_tx_srs_notify]
 *
 * ft_lock -> fe_lock [mac_flow_lookup]
 *
 * mi_rw_lock -> fe_lock [mac_bcast_send]
 *
 * srs_lock -> mac_bw_lock [mac_rx_srs_drain_bw]
 *
 * cpu_lock -> mac_srs_g_lock -> srs_lock -> s_ring_lock [mac_walk_srs_and_bind]
 *
 * i_dls_devnet_lock -> mac layer locks [dls_devnet_rename]
 *
 * Perimeters are ordered P1 -> P2 -> P3 from top to bottom in order of mac
 * client to driver. In the case of clients that explicitly use the mac
 * provided perimeter mechanism for their serialization, the hierarchy is
 * Perimeter -> mac layer locks, since the client never holds any locks across
 * the mac calls. In the case of clients that use their own locks the hierarchy
 * is Client locks -> Mac Perim -> Mac layer locks. The client never explicitly
 * calls mac_perim_enter/exit in this case.
 *
 * Subflow creation rules
 * ---------------------------
 * o In case of a user specified cpulist present on the underlying link and
 *   flows, the flow's cpulist must be a subset of the underlying link's.
 * o In case of a user specified fanout mode present on link and flow, the
 *   subflow fanout count has to be less than or equal to that of the
 *   underlying link. The cpu-bindings for the subflows will be a subset of
 *   the underlying link's.
 * o In case no cpulist is specified on either the underlying link or the
 *   flow, the underlying link relies on a MAC tunable to provide out of box
 *   fanout. The subflow will have no cpulist (the subflow will be unbound).
 * o In case no cpulist is specified on the underlying link, a subflow can
 *   carry either a user-specified cpulist or fanout count. The cpu-bindings
 *   for the subflow will not adhere to the restriction that they need to be
 *   a subset of the underlying link's.
 * o In case the underlying link is carrying either a user specified cpulist
 *   or fanout mode and the subflow is unspecified, the subflow will be
 *   created unbound.
 * o While creating unbound subflows, bandwidth mode changes attempt to
 *   figure out a right fanout count. In such cases the fanout count will
 *   override the unbound cpu-binding behavior.
 * o In addition to this, while cycling between flow and link properties, we
 *   impose a restriction that if a link property has a subflow with
 *   user-specified attributes, we will not allow changing the link property.
 *   The administrator needs to reset all the user specified properties for
 *   the subflows before attempting a link property change.
 * Some of the above rules can be overridden by specifying additional command
 * line options while creating or modifying link or subflow properties, as
 * illustrated by the sketch below.
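 *
 * For instance, the interplay between link and subflow cpu bindings might be
 * exercised from user level roughly as follows (a hypothetical sketch; the
 * link and flow names are illustrative, and the property and attribute names
 * are those of dladm(1M)/flowadm(1M)):
 *
 *	dladm set-linkprop -p cpus=0,1,2,3 net0	    bind the underlying link
 *	flowadm add-flow -l net0 -a transport=tcp tcpflow
 *	flowadm set-flowprop -p cpus=1,2 tcpflow    a subset of the link's cpus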
 *
 * Datapath
 * --------
 *
 * For information on the datapath, the world of soft rings, hardware rings, how
 * it is structured, and the path of an mblk_t between a driver and a mac
 * client, see mac_sched.c.
 */

#include <sys/types.h>
#include <sys/conf.h>
#include <sys/id_space.h>
#include <sys/esunddi.h>
#include <sys/stat.h>
#include <sys/mkdev.h>
#include <sys/stream.h>
#include <sys/strsun.h>
#include <sys/strsubr.h>
#include <sys/dlpi.h>
#include <sys/list.h>
#include <sys/modhash.h>
#include <sys/mac_provider.h>
#include <sys/mac_client_impl.h>
#include <sys/mac_soft_ring.h>
#include <sys/mac_stat.h>
#include <sys/mac_impl.h>
#include <sys/mac.h>
#include <sys/dls.h>
#include <sys/dld.h>
#include <sys/modctl.h>
#include <sys/fs/dv_node.h>
#include <sys/thread.h>
#include <sys/proc.h>
#include <sys/callb.h>
#include <sys/cpuvar.h>
#include <sys/atomic.h>
#include <sys/bitmap.h>
#include <sys/sdt.h>
#include <sys/mac_flow.h>
#include <sys/ddi_intr_impl.h>
#include <sys/disp.h>
#include <sys/vnic.h>
#include <sys/vnic_impl.h>
#include <sys/vlan.h>
#include <inet/ip.h>
#include <inet/ip6.h>
#include <sys/exacct.h>
#include <sys/exacct_impl.h>
#include <inet/nd.h>
#include <sys/ethernet.h>
#include <sys/pool.h>
#include <sys/pool_pset.h>
#include <sys/cpupart.h>
#include <inet/wifi_ioctl.h>
#include <net/wpa.h>

#define	IMPL_HASHSZ	67	/* prime */

kmem_cache_t		*i_mac_impl_cachep;
mod_hash_t		*i_mac_impl_hash;
krwlock_t		i_mac_impl_lock;
uint_t			i_mac_impl_count;
static kmem_cache_t	*mac_ring_cache;
static id_space_t	*minor_ids;
static uint32_t		minor_count;
static pool_event_cb_t	mac_pool_event_reg;

/*
 * Logging stuff. Perhaps mac_logging_interval could be broken into
 * mac_flow_log_interval and mac_link_log_interval if we want to be
 * able to schedule them differently.
 */
uint_t			mac_logging_interval;
boolean_t		mac_flow_log_enable;
boolean_t		mac_link_log_enable;
timeout_id_t		mac_logging_timer;

#define	MACTYPE_KMODDIR	"mac"
#define	MACTYPE_HASHSZ	67
static mod_hash_t	*i_mactype_hash;
/*
 * i_mactype_lock synchronizes threads that obtain references to mactype_t
 * structures through i_mactype_getplugin().
 */
static kmutex_t		i_mactype_lock;

/*
 * mac_tx_percpu_cnt
 *
 * Number of per cpu locks per mac_client_impl_t. Used by the transmit side
 * in mac_tx to reduce lock contention. This is sized at boot time in mac_init.
 * mac_tx_percpu_cnt_max is settable in /etc/system and must be a power of 2.
 * Per cpu locks may be disabled by setting mac_tx_percpu_cnt_max to 1.
 */
int mac_tx_percpu_cnt;
int mac_tx_percpu_cnt_max = 128;

/*
 * Call back functions for the bridge module. These are guaranteed to be valid
 * when holding a reference on a link or when holding mip->mi_bridge_lock and
 * mi_bridge_link is non-NULL.
 */
mac_bridge_tx_t mac_bridge_tx_cb;
mac_bridge_rx_t mac_bridge_rx_cb;
mac_bridge_ref_t mac_bridge_ref_cb;
mac_bridge_ls_t mac_bridge_ls_cb;

static int i_mac_constructor(void *, void *, int);
static void i_mac_destructor(void *, void *);
static int i_mac_ring_ctor(void *, void *, int);
static void i_mac_ring_dtor(void *, void *);
static mblk_t *mac_rx_classify(mac_impl_t *, mac_resource_handle_t, mblk_t *);
void mac_tx_client_flush(mac_client_impl_t *);
void mac_tx_client_block(mac_client_impl_t *);
static void mac_rx_ring_quiesce(mac_ring_t *, uint_t);
static int mac_start_group_and_rings(mac_group_t *);
static void mac_stop_group_and_rings(mac_group_t *);
static void mac_pool_event_cb(pool_event_t, int, void *);

typedef struct netinfo_s {
	list_node_t	ni_link;
	void		*ni_record;
	int		ni_size;
	int		ni_type;
} netinfo_t;

/*
 * Module initialization functions.
 */

void
mac_init(void)
{
	mac_tx_percpu_cnt = ((boot_max_ncpus == -1) ? max_ncpus :
	    boot_max_ncpus);

	/* Upper bound is mac_tx_percpu_cnt_max */
	if (mac_tx_percpu_cnt > mac_tx_percpu_cnt_max)
		mac_tx_percpu_cnt = mac_tx_percpu_cnt_max;

	if (mac_tx_percpu_cnt < 1) {
		/* Someone set mac_tx_percpu_cnt_max to 0 or less */
		mac_tx_percpu_cnt = 1;
	}

	ASSERT(mac_tx_percpu_cnt >= 1);
	mac_tx_percpu_cnt = (1 << highbit(mac_tx_percpu_cnt - 1));
	/*
	 * Make it of the form 2**N - 1 in the range
	 * [0 .. mac_tx_percpu_cnt_max - 1]
	 */
	mac_tx_percpu_cnt--;

	i_mac_impl_cachep = kmem_cache_create("mac_impl_cache",
	    sizeof (mac_impl_t), 0, i_mac_constructor, i_mac_destructor,
	    NULL, NULL, NULL, 0);
	ASSERT(i_mac_impl_cachep != NULL);

	mac_ring_cache = kmem_cache_create("mac_ring_cache",
	    sizeof (mac_ring_t), 0, i_mac_ring_ctor, i_mac_ring_dtor, NULL,
	    NULL, NULL, 0);
	ASSERT(mac_ring_cache != NULL);

	i_mac_impl_hash = mod_hash_create_extended("mac_impl_hash",
	    IMPL_HASHSZ, mod_hash_null_keydtor, mod_hash_null_valdtor,
	    mod_hash_bystr, NULL, mod_hash_strkey_cmp, KM_SLEEP);
	rw_init(&i_mac_impl_lock, NULL, RW_DEFAULT, NULL);

	mac_flow_init();
	mac_soft_ring_init();
	mac_bcast_init();
	mac_client_init();

	i_mac_impl_count = 0;

	i_mactype_hash = mod_hash_create_extended("mactype_hash",
	    MACTYPE_HASHSZ,
	    mod_hash_null_keydtor, mod_hash_null_valdtor,
	    mod_hash_bystr, NULL, mod_hash_strkey_cmp, KM_SLEEP);

	/*
	 * Allocate an id space to manage minor numbers. The range of the
	 * space will be from MAC_MAX_MINOR+1 to MAC_PRIVATE_MINOR-1. This
	 * leaves half of the 32-bit minors available for driver private use.
	 */
	minor_ids = id_space_create("mac_minor_ids", MAC_MAX_MINOR+1,
	    MAC_PRIVATE_MINOR-1);
	ASSERT(minor_ids != NULL);
	minor_count = 0;

	/* Let's default to 20 seconds */
	mac_logging_interval = 20;
	mac_flow_log_enable = B_FALSE;
	mac_link_log_enable = B_FALSE;
	mac_logging_timer = NULL;

	/* Register to be notified of noteworthy pool events */
	mac_pool_event_reg.pec_func = mac_pool_event_cb;
	mac_pool_event_reg.pec_arg = NULL;
	pool_event_cb_register(&mac_pool_event_reg);
}
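/*
 * To illustrate the mac_tx_percpu_cnt sizing arithmetic above (an
 * illustrative walk-through, not additional code): with, say,
 * boot_max_ncpus == 48, highbit(47) is 6, so mac_tx_percpu_cnt becomes
 * 1 << 6 == 64, and the final decrement leaves 63, i.e. a
 * power-of-two-minus-one value suitable for use as a mask when selecting a
 * per-cpu lock, e.g. cpu_seqid & mac_tx_percpu_cnt.
 */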
int
mac_fini(void)
{
	if (i_mac_impl_count > 0 || minor_count > 0)
		return (EBUSY);

	pool_event_cb_unregister(&mac_pool_event_reg);

	id_space_destroy(minor_ids);
	mac_flow_fini();

	mod_hash_destroy_hash(i_mac_impl_hash);
	rw_destroy(&i_mac_impl_lock);

	mac_client_fini();
	kmem_cache_destroy(mac_ring_cache);

	mod_hash_destroy_hash(i_mactype_hash);
	mac_soft_ring_finish();

	return (0);
}

/*
 * Initialize a GLDv3 driver's device ops. A driver that manages its own ops
 * (e.g. softmac) may pass in a NULL ops argument.
 */
void
mac_init_ops(struct dev_ops *ops, const char *name)
{
	major_t major = ddi_name_to_major((char *)name);

	/*
	 * By returning on error below, we are not letting the driver continue
	 * in an undefined context. The mac_register() function will fail if
	 * DN_GLDV3_DRIVER isn't set.
	 */
	if (major == DDI_MAJOR_T_NONE)
		return;
	LOCK_DEV_OPS(&devnamesp[major].dn_lock);
	devnamesp[major].dn_flags |= (DN_GLDV3_DRIVER | DN_NETWORK_DRIVER);
	UNLOCK_DEV_OPS(&devnamesp[major].dn_lock);
	if (ops != NULL)
		dld_init_ops(ops, name);
}

void
mac_fini_ops(struct dev_ops *ops)
{
	dld_fini_ops(ops);
}

/*ARGSUSED*/
static int
i_mac_constructor(void *buf, void *arg, int kmflag)
{
	mac_impl_t *mip = buf;

	bzero(buf, sizeof (mac_impl_t));

	mip->mi_linkstate = LINK_STATE_UNKNOWN;

	rw_init(&mip->mi_rw_lock, NULL, RW_DRIVER, NULL);
	mutex_init(&mip->mi_notify_lock, NULL, MUTEX_DRIVER, NULL);
	mutex_init(&mip->mi_promisc_lock, NULL, MUTEX_DRIVER, NULL);
	mutex_init(&mip->mi_ring_lock, NULL, MUTEX_DEFAULT, NULL);

	mip->mi_notify_cb_info.mcbi_lockp = &mip->mi_notify_lock;
	cv_init(&mip->mi_notify_cb_info.mcbi_cv, NULL, CV_DRIVER, NULL);
	mip->mi_promisc_cb_info.mcbi_lockp = &mip->mi_promisc_lock;
	cv_init(&mip->mi_promisc_cb_info.mcbi_cv, NULL, CV_DRIVER, NULL);

	mutex_init(&mip->mi_bridge_lock, NULL, MUTEX_DEFAULT, NULL);

	return (0);
}

/*ARGSUSED*/
static void
i_mac_destructor(void *buf, void *arg)
{
	mac_impl_t	*mip = buf;
	mac_cb_info_t	*mcbi;

	ASSERT(mip->mi_ref == 0);
	ASSERT(mip->mi_active == 0);
	ASSERT(mip->mi_linkstate == LINK_STATE_UNKNOWN);
	ASSERT(mip->mi_devpromisc == 0);
	ASSERT(mip->mi_ksp == NULL);
	ASSERT(mip->mi_kstat_count == 0);
	ASSERT(mip->mi_nclients == 0);
	ASSERT(mip->mi_nactiveclients == 0);
	ASSERT(mip->mi_single_active_client == NULL);
	ASSERT(mip->mi_state_flags == 0);
	ASSERT(mip->mi_factory_addr == NULL);
	ASSERT(mip->mi_factory_addr_num == 0);
	ASSERT(mip->mi_default_tx_ring == NULL);

	mcbi = &mip->mi_notify_cb_info;
	ASSERT(mcbi->mcbi_del_cnt == 0 && mcbi->mcbi_walker_cnt == 0);
	ASSERT(mip->mi_notify_bits == 0);
	ASSERT(mip->mi_notify_thread == NULL);
	ASSERT(mcbi->mcbi_lockp == &mip->mi_notify_lock);
	mcbi->mcbi_lockp = NULL;

	mcbi = &mip->mi_promisc_cb_info;
	ASSERT(mcbi->mcbi_del_cnt == 0 && mip->mi_promisc_list == NULL);
	ASSERT(mcbi->mcbi_lockp == &mip->mi_promisc_lock);
	mcbi->mcbi_lockp = NULL;

	ASSERT(mip->mi_bcast_ngrps == 0 && mip->mi_bcast_grp == NULL);
	ASSERT(mip->mi_perim_owner == NULL && mip->mi_perim_ocnt == 0);

	rw_destroy(&mip->mi_rw_lock);

	mutex_destroy(&mip->mi_promisc_lock);
	cv_destroy(&mip->mi_promisc_cb_info.mcbi_cv);
	mutex_destroy(&mip->mi_notify_lock);
	cv_destroy(&mip->mi_notify_cb_info.mcbi_cv);
	mutex_destroy(&mip->mi_ring_lock);

	ASSERT(mip->mi_bridge_link == NULL);
}

/* ARGSUSED */
static int
i_mac_ring_ctor(void *buf, void *arg, int kmflag)
{
	mac_ring_t *ring = (mac_ring_t *)buf;

	bzero(ring, sizeof (mac_ring_t));
	cv_init(&ring->mr_cv, NULL, CV_DEFAULT, NULL);
	mutex_init(&ring->mr_lock, NULL, MUTEX_DEFAULT, NULL);
	ring->mr_state = MR_FREE;
	return (0);
}

/* ARGSUSED */
static void
i_mac_ring_dtor(void *buf, void *arg)
{
	mac_ring_t *ring = (mac_ring_t *)buf;

	cv_destroy(&ring->mr_cv);
	mutex_destroy(&ring->mr_lock);
}

/*
 * Common functions to do mac callback addition and deletion. Currently this is
 * used by promisc callbacks and notify callbacks. List addition and deletion
 * need to take care of list walkers. List walkers in general can't hold list
 * locks and make upcall callbacks due to potential lock order and recursive
 * reentry issues. Instead list walkers increment the list walker count to mark
 * the presence of a walker thread. Addition can be carefully done to ensure
 * that the list walker always sees either the old list or the new list.
 * However the deletion can't be done while the walker is active, instead the
 * deleting thread simply marks the entry as logically deleted. The last walker
 * physically deletes and frees up the logically deleted entries when the walk
 * is complete. The expected calling sequence is sketched below.
 */
void
mac_callback_add(mac_cb_info_t *mcbi, mac_cb_t **mcb_head,
    mac_cb_t *mcb_elem)
{
	mac_cb_t	*p;
	mac_cb_t	**pp;

	/* Verify it is not already in the list */
	for (pp = mcb_head; (p = *pp) != NULL; pp = &p->mcb_nextp) {
		if (p == mcb_elem)
			break;
	}
	VERIFY(p == NULL);

	/*
	 * Add it to the head of the callback list. The membar ensures that
	 * the following list pointer manipulations reach global visibility
	 * in exactly the program order below.
	 */
	ASSERT(MUTEX_HELD(mcbi->mcbi_lockp));

	mcb_elem->mcb_nextp = *mcb_head;
	membar_producer();
	*mcb_head = mcb_elem;
}

/*
 * Mark the entry as logically deleted. If there aren't any walkers unlink
 * from the list. In either case return the corresponding status.
 */
boolean_t
mac_callback_remove(mac_cb_info_t *mcbi, mac_cb_t **mcb_head,
    mac_cb_t *mcb_elem)
{
	mac_cb_t	*p;
	mac_cb_t	**pp;

	ASSERT(MUTEX_HELD(mcbi->mcbi_lockp));
	/*
	 * Search the callback list for the entry to be removed
	 */
	for (pp = mcb_head; (p = *pp) != NULL; pp = &p->mcb_nextp) {
		if (p == mcb_elem)
			break;
	}
	VERIFY(p != NULL);

	/*
	 * If there are walkers just mark it as deleted and the last walker
	 * will remove from the list and free it.
	 */
	if (mcbi->mcbi_walker_cnt != 0) {
		p->mcb_flags |= MCB_CONDEMNED;
		mcbi->mcbi_del_cnt++;
		return (B_FALSE);
	}

	ASSERT(mcbi->mcbi_del_cnt == 0);
	*pp = p->mcb_nextp;
	p->mcb_nextp = NULL;
	return (B_TRUE);
}
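/*
 * The walker pattern that the routines above and below cooperate to
 * implement looks roughly as follows (a hypothetical sketch; the list head
 * and the per-entry action are illustrative):
 *
 *	mac_callback_walker_enter(mcbi);		walker count++
 *	for (mcb = *mcb_head; mcb != NULL; mcb = mcb->mcb_nextp) {
 *		if (mcb->mcb_flags & MCB_CONDEMNED)	skip deleted entries
 *			continue;
 *		make the upcall without holding the list lock
 *	}
 *	mac_callback_walker_exit(mcbi, mcb_head, B_FALSE);
 *
 * The last walker to exit physically unlinks and frees the entries that
 * mac_callback_remove() could only mark as MCB_CONDEMNED while walks were
 * in progress.
 */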
/*
 * Wait for all pending callback removals to be completed
 */
void
mac_callback_remove_wait(mac_cb_info_t *mcbi)
{
	ASSERT(MUTEX_HELD(mcbi->mcbi_lockp));
	while (mcbi->mcbi_del_cnt != 0) {
		DTRACE_PROBE1(need_wait, mac_cb_info_t *, mcbi);
		cv_wait(&mcbi->mcbi_cv, mcbi->mcbi_lockp);
	}
}

void
mac_callback_barrier(mac_cb_info_t *mcbi)
{
	ASSERT(MUTEX_HELD(mcbi->mcbi_lockp));
	ASSERT3U(mcbi->mcbi_barrier_cnt, <, UINT_MAX);

	if (mcbi->mcbi_walker_cnt == 0) {
		return;
	}

	mcbi->mcbi_barrier_cnt++;
	do {
		cv_wait(&mcbi->mcbi_cv, mcbi->mcbi_lockp);
	} while (mcbi->mcbi_walker_cnt > 0);
	mcbi->mcbi_barrier_cnt--;
	cv_broadcast(&mcbi->mcbi_cv);
}

void
mac_callback_walker_enter(mac_cb_info_t *mcbi)
{
	mutex_enter(mcbi->mcbi_lockp);
	/*
	 * Incoming walkers should give precedence to timely clean-up of
	 * deleted callback entries and requested barriers.
	 */
	while (mcbi->mcbi_del_cnt > 0 || mcbi->mcbi_barrier_cnt > 0) {
		cv_wait(&mcbi->mcbi_cv, mcbi->mcbi_lockp);
	}
	mcbi->mcbi_walker_cnt++;
	mutex_exit(mcbi->mcbi_lockp);
}

/*
 * The last mac callback walker does the cleanup. Walk the list and unlink
 * all the logically deleted entries and construct a temporary list of
 * removed entries. Return the list of removed entries to the caller.
 */
static mac_cb_t *
mac_callback_walker_cleanup(mac_cb_info_t *mcbi, mac_cb_t **mcb_head)
{
	mac_cb_t	*p;
	mac_cb_t	**pp;
	mac_cb_t	*rmlist = NULL;		/* List of removed elements */
	int	cnt = 0;

	ASSERT(MUTEX_HELD(mcbi->mcbi_lockp));
	ASSERT(mcbi->mcbi_del_cnt != 0 && mcbi->mcbi_walker_cnt == 0);

	pp = mcb_head;
	while (*pp != NULL) {
		if ((*pp)->mcb_flags & MCB_CONDEMNED) {
			p = *pp;
			*pp = p->mcb_nextp;
			p->mcb_nextp = rmlist;
			rmlist = p;
			cnt++;
			continue;
		}
		pp = &(*pp)->mcb_nextp;
	}

	ASSERT(mcbi->mcbi_del_cnt == cnt);
	mcbi->mcbi_del_cnt = 0;
	return (rmlist);
}

void
mac_callback_walker_exit(mac_cb_info_t *mcbi, mac_cb_t **headp,
    boolean_t is_promisc)
{
	boolean_t do_wake = B_FALSE;

	mutex_enter(mcbi->mcbi_lockp);

	/* If walkers remain, nothing more can be done for now */
	if (--mcbi->mcbi_walker_cnt != 0) {
		mutex_exit(mcbi->mcbi_lockp);
		return;
	}

	if (mcbi->mcbi_del_cnt != 0) {
		mac_cb_t *rmlist;

		rmlist = mac_callback_walker_cleanup(mcbi, headp);

		if (!is_promisc) {
			/* The "normal" non-promisc callback clean-up */
			mac_callback_free(rmlist);
		} else {
			mac_cb_t *mcb, *mcb_next;

			/*
			 * The promisc callbacks are in 2 lists, one off the
			 * 'mip' and another off the 'mcip' threaded by
			 * mpi_mi_link and mpi_mci_link respectively. There
			 * is, however, only a single shared total walker
			 * count, and an entry cannot be physically unlinked if
			 * a walker is active on either list. The last walker
			 * does this cleanup of logically deleted entries.
			 *
			 * With a list of callbacks deleted from above from
			 * mi_promisc_list (headp), remove the corresponding
			 * entry from mci_promisc_list (headp_pair) and free
			 * the structure.
			 */
			for (mcb = rmlist; mcb != NULL; mcb = mcb_next) {
				mac_promisc_impl_t *mpip;
				mac_client_impl_t *mcip;

				mcb_next = mcb->mcb_nextp;
				mpip = (mac_promisc_impl_t *)mcb->mcb_objp;
				mcip = mpip->mpi_mcip;

				ASSERT3P(&mcip->mci_mip->mi_promisc_cb_info,
				    ==, mcbi);
				ASSERT3P(&mcip->mci_mip->mi_promisc_list,
				    ==, headp);

				VERIFY(mac_callback_remove(mcbi,
				    &mcip->mci_promisc_list,
				    &mpip->mpi_mci_link));
				mcb->mcb_flags = 0;
				mcb->mcb_nextp = NULL;
				kmem_cache_free(mac_promisc_impl_cache, mpip);
			}
		}

		/*
		 * Wake any walker threads that could be waiting in
		 * mac_callback_walker_enter() until deleted items have been
		 * cleaned from the list.
		 */
		do_wake = B_TRUE;
	}

	if (mcbi->mcbi_barrier_cnt != 0) {
		/*
		 * One or more threads are waiting for all walkers to exit the
		 * callback list. Notify them, now that the list is clear.
		 */
		do_wake = B_TRUE;
	}

	if (do_wake) {
		cv_broadcast(&mcbi->mcbi_cv);
	}
	mutex_exit(mcbi->mcbi_lockp);
}

static boolean_t
mac_callback_lookup(mac_cb_t **mcb_headp, mac_cb_t *mcb_elem)
{
	mac_cb_t	*mcb;

	/* Verify it is not already in the list */
	for (mcb = *mcb_headp; mcb != NULL; mcb = mcb->mcb_nextp) {
		if (mcb == mcb_elem)
			return (B_TRUE);
	}

	return (B_FALSE);
}

static boolean_t
mac_callback_find(mac_cb_info_t *mcbi, mac_cb_t **mcb_headp, mac_cb_t *mcb_elem)
{
	boolean_t found;

	mutex_enter(mcbi->mcbi_lockp);
	found = mac_callback_lookup(mcb_headp, mcb_elem);
	mutex_exit(mcbi->mcbi_lockp);

	return (found);
}

/* Free the list of removed callbacks */
void
mac_callback_free(mac_cb_t *rmlist)
{
	mac_cb_t	*mcb;
	mac_cb_t	*mcb_next;

	for (mcb = rmlist; mcb != NULL; mcb = mcb_next) {
		mcb_next = mcb->mcb_nextp;
		kmem_free(mcb->mcb_objp, mcb->mcb_objsize);
	}
}

void
i_mac_notify(mac_impl_t *mip, mac_notify_type_t type)
{
	mac_cb_info_t *mcbi;

	/*
	 * Signal the notify thread even after mi_ref has become zero and
	 * mi_disabled is set. The synchronization with the notify thread
	 * happens in mac_unregister and that implies the driver must make
	 * sure it is single-threaded (with respect to mac calls) and that
	 * all pending mac calls have returned before it calls mac_unregister.
	 */
	rw_enter(&i_mac_impl_lock, RW_READER);
	if (mip->mi_state_flags & MIS_DISABLED)
		goto exit;

	/*
	 * Guard against incorrect notifications. (Running a newer
	 * mac client against an older implementation?)
	 */
	if (type >= MAC_NNOTE)
		goto exit;

	mcbi = &mip->mi_notify_cb_info;
	mutex_enter(mcbi->mcbi_lockp);
	mip->mi_notify_bits |= (1 << type);
	cv_broadcast(&mcbi->mcbi_cv);
	mutex_exit(mcbi->mcbi_lockp);

exit:
	rw_exit(&i_mac_impl_lock);
}

/*
 * Mac serialization primitives. Please see the block comment at the
 * top of the file.
 */
void
i_mac_perim_enter(mac_impl_t *mip)
{
	mac_client_impl_t *mcip;

	if (mip->mi_state_flags & MIS_IS_VNIC) {
		/*
		 * This is a VNIC. Use the lower mac since that is what
		 * we want to serialize on.
		 */
		mcip = mac_vnic_lower(mip);
		mip = mcip->mci_mip;
	}

	mutex_enter(&mip->mi_perim_lock);
	if (mip->mi_perim_owner == curthread) {
		mip->mi_perim_ocnt++;
		mutex_exit(&mip->mi_perim_lock);
		return;
	}

	while (mip->mi_perim_owner != NULL)
		cv_wait(&mip->mi_perim_cv, &mip->mi_perim_lock);

	mip->mi_perim_owner = curthread;
	ASSERT(mip->mi_perim_ocnt == 0);
	mip->mi_perim_ocnt++;
#ifdef DEBUG
	mip->mi_perim_stack_depth = getpcstack(mip->mi_perim_stack,
	    MAC_PERIM_STACK_DEPTH);
#endif
	mutex_exit(&mip->mi_perim_lock);
}

int
i_mac_perim_enter_nowait(mac_impl_t *mip)
{
	/*
	 * The vnic is a special case, since the serialization is done based
	 * on the lower mac. If the lower mac is busy, it does not imply the
	 * vnic can't be unregistered. But in the case of other drivers,
	 * a busy perimeter or open mac handles implies that the mac is busy
	 * and can't be unregistered.
	 */
	if (mip->mi_state_flags & MIS_IS_VNIC) {
		i_mac_perim_enter(mip);
		return (0);
	}

	mutex_enter(&mip->mi_perim_lock);
	if (mip->mi_perim_owner != NULL) {
		mutex_exit(&mip->mi_perim_lock);
		return (EBUSY);
	}
	ASSERT(mip->mi_perim_ocnt == 0);
	mip->mi_perim_owner = curthread;
	mip->mi_perim_ocnt++;
	mutex_exit(&mip->mi_perim_lock);

	return (0);
}

void
i_mac_perim_exit(mac_impl_t *mip)
{
	mac_client_impl_t *mcip;

	if (mip->mi_state_flags & MIS_IS_VNIC) {
		/*
		 * This is a VNIC. Use the lower mac since that is what
		 * we serialized on.
		 */
		mcip = mac_vnic_lower(mip);
		mip = mcip->mci_mip;
	}

	ASSERT(mip->mi_perim_owner == curthread && mip->mi_perim_ocnt != 0);

	mutex_enter(&mip->mi_perim_lock);
	if (--mip->mi_perim_ocnt == 0) {
		mip->mi_perim_owner = NULL;
		cv_signal(&mip->mi_perim_cv);
	}
	mutex_exit(&mip->mi_perim_lock);
}

/*
 * Returns whether the current thread holds the mac perimeter. Used in making
 * assertions.
 */
boolean_t
mac_perim_held(mac_handle_t mh)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;
	mac_client_impl_t *mcip;

	if (mip->mi_state_flags & MIS_IS_VNIC) {
		/*
		 * This is a VNIC. Check the lower mac since that is what
		 * we serialize on.
		 */
		mcip = mac_vnic_lower(mip);
		mip = mcip->mci_mip;
	}
	return (mip->mi_perim_owner == curthread);
}
/*
 * mac client interfaces to enter the mac perimeter of a mac end point, given
 * its mac handle, or macname or linkid.
 */
void
mac_perim_enter_by_mh(mac_handle_t mh, mac_perim_handle_t *mphp)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;

	i_mac_perim_enter(mip);
	/*
	 * The mac_perim_handle_t returned encodes the 'mip' and whether a
	 * mac_open has been done internally while entering the perimeter.
	 * This information is used in mac_perim_exit.
	 */
	MAC_ENCODE_MPH(*mphp, mip, 0);
}

int
mac_perim_enter_by_macname(const char *name, mac_perim_handle_t *mphp)
{
	int	err;
	mac_handle_t	mh;

	if ((err = mac_open(name, &mh)) != 0)
		return (err);

	mac_perim_enter_by_mh(mh, mphp);
	MAC_ENCODE_MPH(*mphp, mh, 1);
	return (0);
}

int
mac_perim_enter_by_linkid(datalink_id_t linkid, mac_perim_handle_t *mphp)
{
	int	err;
	mac_handle_t	mh;

	if ((err = mac_open_by_linkid(linkid, &mh)) != 0)
		return (err);

	mac_perim_enter_by_mh(mh, mphp);
	MAC_ENCODE_MPH(*mphp, mh, 1);
	return (0);
}

void
mac_perim_exit(mac_perim_handle_t mph)
{
	mac_impl_t	*mip;
	boolean_t	need_close;

	MAC_DECODE_MPH(mph, mip, need_close);
	i_mac_perim_exit(mip);
	if (need_close)
		mac_close((mac_handle_t)mip);
}

int
mac_hold(const char *macname, mac_impl_t **pmip)
{
	mac_impl_t	*mip;
	int		err;

	/*
	 * Check the device name length to make sure it won't overflow our
	 * buffer.
	 */
	if (strlen(macname) >= MAXNAMELEN)
		return (EINVAL);

	/*
	 * Look up its entry in the global hash table.
	 */
	rw_enter(&i_mac_impl_lock, RW_WRITER);
	err = mod_hash_find(i_mac_impl_hash, (mod_hash_key_t)macname,
	    (mod_hash_val_t *)&mip);

	if (err != 0) {
		rw_exit(&i_mac_impl_lock);
		return (ENOENT);
	}

	if (mip->mi_state_flags & MIS_DISABLED) {
		rw_exit(&i_mac_impl_lock);
		return (ENOENT);
	}

	if (mip->mi_state_flags & MIS_EXCLUSIVE_HELD) {
		rw_exit(&i_mac_impl_lock);
		return (EBUSY);
	}

	mip->mi_ref++;
	rw_exit(&i_mac_impl_lock);

	*pmip = mip;
	return (0);
}

void
mac_rele(mac_impl_t *mip)
{
	rw_enter(&i_mac_impl_lock, RW_WRITER);
	ASSERT(mip->mi_ref != 0);
	if (--mip->mi_ref == 0) {
		ASSERT(mip->mi_nactiveclients == 0 &&
		    !(mip->mi_state_flags & MIS_EXCLUSIVE));
	}
	rw_exit(&i_mac_impl_lock);
}
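/*
 * The perimeter entry points above are typically used to bracket a sequence
 * of control operations, for example (a hypothetical sketch; the calls made
 * inside the perimeter are illustrative only):
 *
 *	mac_perim_handle_t mph;
 *
 *	if ((err = mac_perim_enter_by_linkid(linkid, &mph)) != 0)
 *		return (err);
 *	... issue a sequence of mac calls, atomic as a block ...
 *	mac_perim_exit(mph);
 *
 * mac_perim_enter_by_linkid() and mac_perim_enter_by_macname() also perform
 * an internal mac_open, which mac_perim_exit() undoes via the state encoded
 * in the mac_perim_handle_t.
 */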
/*
 * Private GLDv3 function to start a MAC instance.
 */
int
mac_start(mac_handle_t mh)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;
	int		err = 0;
	mac_group_t	*defgrp;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
	ASSERT(mip->mi_start != NULL);

	/*
	 * Check whether the device is already started.
	 */
	if (mip->mi_active++ == 0) {
		mac_ring_t *ring = NULL;

		/*
		 * Start the device.
		 */
		err = mip->mi_start(mip->mi_driver);
		if (err != 0) {
			mip->mi_active--;
			return (err);
		}

		/*
		 * Start the default tx ring.
		 */
		if (mip->mi_default_tx_ring != NULL) {
			ring = (mac_ring_t *)mip->mi_default_tx_ring;
			if (ring->mr_state != MR_INUSE) {
				err = mac_start_ring(ring);
				if (err != 0) {
					mip->mi_active--;
					return (err);
				}
			}
		}

		if ((defgrp = MAC_DEFAULT_RX_GROUP(mip)) != NULL) {
			/*
			 * Start the default group which is responsible
			 * for receiving broadcast and multicast
			 * traffic for both primary and non-primary
			 * MAC clients.
			 */
			ASSERT(defgrp->mrg_state == MAC_GROUP_STATE_REGISTERED);
			err = mac_start_group_and_rings(defgrp);
			if (err != 0) {
				mip->mi_active--;
				if ((ring != NULL) &&
				    (ring->mr_state == MR_INUSE))
					mac_stop_ring(ring);
				return (err);
			}
			mac_set_group_state(defgrp, MAC_GROUP_STATE_SHARED);
		}
	}

	return (err);
}

/*
 * Private GLDv3 function to stop a MAC instance.
 */
void
mac_stop(mac_handle_t mh)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;
	mac_group_t	*grp;

	ASSERT(mip->mi_stop != NULL);
	ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));

	/*
	 * Check whether the device is still needed.
	 */
	ASSERT(mip->mi_active != 0);
	if (--mip->mi_active == 0) {
		if ((grp = MAC_DEFAULT_RX_GROUP(mip)) != NULL) {
			/*
			 * There should be no more active clients since the
			 * MAC is being stopped. Stop the default RX group
			 * and transition it back to registered state.
			 *
			 * When clients are torn down, the groups
			 * are released via mac_release_rx_group which
			 * knows that the default group is always in
			 * started mode since broadcast uses it. So
			 * we can assert that there are no clients
			 * (since mac_bcast_add doesn't register itself
			 * as a client) and that the group is in SHARED state.
			 */
			ASSERT(grp->mrg_state == MAC_GROUP_STATE_SHARED);
			ASSERT(MAC_GROUP_NO_CLIENT(grp) &&
			    mip->mi_nactiveclients == 0);
			mac_stop_group_and_rings(grp);
			mac_set_group_state(grp, MAC_GROUP_STATE_REGISTERED);
		}

		if (mip->mi_default_tx_ring != NULL) {
			mac_ring_t *ring;

			ring = (mac_ring_t *)mip->mi_default_tx_ring;
			if (ring->mr_state == MR_INUSE) {
				mac_stop_ring(ring);
				ring->mr_flag = 0;
			}
		}

		/*
		 * Stop the device.
		 */
		mip->mi_stop(mip->mi_driver);
	}
}

int
i_mac_promisc_set(mac_impl_t *mip, boolean_t on)
{
	int err = 0;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
	ASSERT(mip->mi_setpromisc != NULL);

	if (on) {
		/*
		 * Enable promiscuous mode on the device if not yet enabled.
		 */
		if (mip->mi_devpromisc++ == 0) {
			err = mip->mi_setpromisc(mip->mi_driver, B_TRUE);
			if (err != 0) {
				mip->mi_devpromisc--;
				return (err);
			}
			i_mac_notify(mip, MAC_NOTE_DEVPROMISC);
		}
	} else {
		if (mip->mi_devpromisc == 0)
			return (EPROTO);

		/*
		 * Disable promiscuous mode on the device if this is the last
		 * enabling.
		 */
		if (--mip->mi_devpromisc == 0) {
			err = mip->mi_setpromisc(mip->mi_driver, B_FALSE);
			if (err != 0) {
				mip->mi_devpromisc++;
				return (err);
			}
			i_mac_notify(mip, MAC_NOTE_DEVPROMISC);
		}
	}

	return (0);
}

/*
 * The promiscuity state can change at any time. If the caller needs to take
 * actions that are atomic with the promiscuity state, then the caller needs
 * to bracket the entire sequence with mac_perim_enter/exit, as sketched below.
 */
boolean_t
mac_promisc_get(mac_handle_t mh)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;

	/*
	 * Return the current promiscuity.
	 */
	return (mip->mi_devpromisc != 0);
}
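/*
 * For example, a caller that must act on a stable promiscuity value might do
 * the following (a hypothetical sketch; the action taken is illustrative):
 *
 *	mac_perim_enter_by_mh(mh, &mph);
 *	if (mac_promisc_get(mh)) {
 *		... act on the device being in promiscuous mode; no set-type
 *		... control operation can change it until the perimeter is
 *		... released
 *	}
 *	mac_perim_exit(mph);
 */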
/*
 * Invoked at MAC instance attach time to initialize the list
 * of factory MAC addresses supported by a MAC instance. This function
 * builds a local cache in the mac_impl_t for the MAC addresses
 * supported by the underlying hardware. The MAC clients themselves
 * use the mac_addr_factory*() functions to query and reserve
 * factory MAC addresses.
 */
void
mac_addr_factory_init(mac_impl_t *mip)
{
	mac_capab_multifactaddr_t capab;
	uint8_t *addr;
	int i;

	/*
	 * First round to see how many factory MAC addresses are available.
	 */
	bzero(&capab, sizeof (capab));
	if (!i_mac_capab_get((mac_handle_t)mip, MAC_CAPAB_MULTIFACTADDR,
	    &capab) || (capab.mcm_naddr == 0)) {
		/*
		 * The MAC instance doesn't support multiple factory
		 * MAC addresses, we're done here.
		 */
		return;
	}

	/*
	 * Allocate the space and get all the factory addresses.
	 */
	addr = kmem_alloc(capab.mcm_naddr * MAXMACADDRLEN, KM_SLEEP);
	capab.mcm_getaddr(mip->mi_driver, capab.mcm_naddr, addr);

	mip->mi_factory_addr_num = capab.mcm_naddr;
	mip->mi_factory_addr = kmem_zalloc(mip->mi_factory_addr_num *
	    sizeof (mac_factory_addr_t), KM_SLEEP);

	for (i = 0; i < capab.mcm_naddr; i++) {
		bcopy(addr + i * MAXMACADDRLEN,
		    mip->mi_factory_addr[i].mfa_addr,
		    mip->mi_type->mt_addr_length);
		mip->mi_factory_addr[i].mfa_in_use = B_FALSE;
	}

	kmem_free(addr, capab.mcm_naddr * MAXMACADDRLEN);
}

void
mac_addr_factory_fini(mac_impl_t *mip)
{
	if (mip->mi_factory_addr == NULL) {
		ASSERT(mip->mi_factory_addr_num == 0);
		return;
	}

	kmem_free(mip->mi_factory_addr, mip->mi_factory_addr_num *
	    sizeof (mac_factory_addr_t));

	mip->mi_factory_addr = NULL;
	mip->mi_factory_addr_num = 0;
}

/*
 * Reserve a factory MAC address. If *slot is set to -1, the function
 * attempts to reserve any of the available factory MAC addresses and
 * returns the reserved slot id. If no slots are available, the function
 * returns ENOSPC. If *slot is not set to -1, the function reserves
 * the specified slot if it is available, or returns EBUSY if the slot
 * is already used. Returns ENOTSUP if the underlying MAC does not
 * support multiple factory addresses. If the slot number is not -1 but
 * is invalid, returns EINVAL.
 */
int
mac_addr_factory_reserve(mac_client_handle_t mch, int *slot)
{
	mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
	mac_impl_t *mip = mcip->mci_mip;
	int i, ret = 0;

	i_mac_perim_enter(mip);
	/*
	 * Protect against concurrent readers that may need a self-consistent
	 * view of the factory addresses
	 */
	rw_enter(&mip->mi_rw_lock, RW_WRITER);

	if (mip->mi_factory_addr_num == 0) {
		ret = ENOTSUP;
		goto bail;
	}

	if (*slot != -1) {
		/* check the specified slot */
		if (*slot < 1 || *slot > mip->mi_factory_addr_num) {
			ret = EINVAL;
			goto bail;
		}
		if (mip->mi_factory_addr[*slot-1].mfa_in_use) {
			ret = EBUSY;
			goto bail;
		}
	} else {
		/* pick the next available slot */
		for (i = 0; i < mip->mi_factory_addr_num; i++) {
			if (!mip->mi_factory_addr[i].mfa_in_use)
				break;
		}

		if (i == mip->mi_factory_addr_num) {
			ret = ENOSPC;
			goto bail;
		}
		*slot = i+1;
	}

	mip->mi_factory_addr[*slot-1].mfa_in_use = B_TRUE;
	mip->mi_factory_addr[*slot-1].mfa_client = mcip;

bail:
	rw_exit(&mip->mi_rw_lock);
	i_mac_perim_exit(mip);
	return (ret);
}

/*
 * Release the specified factory MAC address slot.
 */
void
mac_addr_factory_release(mac_client_handle_t mch, uint_t slot)
{
	mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
	mac_impl_t *mip = mcip->mci_mip;

	i_mac_perim_enter(mip);
	/*
	 * Protect against concurrent readers that may need a self-consistent
	 * view of the factory addresses
	 */
	rw_enter(&mip->mi_rw_lock, RW_WRITER);

	ASSERT(slot > 0 && slot <= mip->mi_factory_addr_num);
	ASSERT(mip->mi_factory_addr[slot-1].mfa_in_use);

	mip->mi_factory_addr[slot-1].mfa_in_use = B_FALSE;

	rw_exit(&mip->mi_rw_lock);
	i_mac_perim_exit(mip);
}

/*
 * Stores in mac_addr the value of the specified MAC address slot, and in
 * addr_len its length. The caller must pass a valid slot number for the MAC
 * (see the assertion below). If the slot is in use and client_name is
 * non-NULL, the name of the owning client is copied out as well; the caller
 * must provide a string of at least MAXNAMELEN bytes for it.
 */
void
mac_addr_factory_value(mac_handle_t mh, int slot, uchar_t *mac_addr,
    uint_t *addr_len, char *client_name, boolean_t *in_use_arg)
{
	mac_impl_t *mip = (mac_impl_t *)mh;
	boolean_t in_use;

	ASSERT(slot > 0 && slot <= mip->mi_factory_addr_num);

	/*
	 * Readers need to hold mi_rw_lock. Writers need to hold the mac
	 * perimeter and mi_rw_lock.
	 */
	rw_enter(&mip->mi_rw_lock, RW_READER);
	bcopy(mip->mi_factory_addr[slot-1].mfa_addr, mac_addr, MAXMACADDRLEN);
	*addr_len = mip->mi_type->mt_addr_length;
	in_use = mip->mi_factory_addr[slot-1].mfa_in_use;
	if (in_use && client_name != NULL) {
		bcopy(mip->mi_factory_addr[slot-1].mfa_client->mci_name,
		    client_name, MAXNAMELEN);
	}
	if (in_use_arg != NULL)
		*in_use_arg = in_use;
	rw_exit(&mip->mi_rw_lock);
}
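/*
 * A client would typically use the reservation interfaces above as in the
 * following hypothetical sketch (slot numbering is 1-based; -1 requests any
 * available slot):
 *
 *	int slot = -1;
 *
 *	if (mac_addr_factory_reserve(mch, &slot) == 0) {
 *		... use the address in slot; other clients get EBUSY if
 *		... they try to reserve the same slot
 *		mac_addr_factory_release(mch, slot);
 *	}
 */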
/*
 * Returns the number of factory MAC addresses (in addition to the
 * primary MAC address), or 0 if the underlying MAC doesn't support
 * that feature.
 */
uint_t
mac_addr_factory_num(mac_handle_t mh)
{
	mac_impl_t *mip = (mac_impl_t *)mh;

	return (mip->mi_factory_addr_num);
}

void
mac_rx_group_unmark(mac_group_t *grp, uint_t flag)
{
	mac_ring_t	*ring;

	for (ring = grp->mrg_rings; ring != NULL; ring = ring->mr_next)
		ring->mr_flag &= ~flag;
}

/*
 * The following mac_hwrings_xxx() functions are private mac client functions
 * used by the aggr driver to access and control the underlying HW Rx group
 * and rings. In this case, the aggr driver has exclusive control of the
 * underlying HW Rx group/rings, and it calls the following functions to
 * start/stop the HW Rx rings, disable/enable polling, add/remove MAC
 * addresses, or set up the Rx callback.
 */
/* ARGSUSED */
static void
mac_hwrings_rx_process(void *arg, mac_resource_handle_t srs,
    mblk_t *mp_chain, boolean_t loopback)
{
	mac_soft_ring_set_t	*mac_srs = (mac_soft_ring_set_t *)srs;
	mac_srs_rx_t		*srs_rx = &mac_srs->srs_rx;
	mac_direct_rx_t		proc;
	void			*arg1;
	mac_resource_handle_t	arg2;

	proc = srs_rx->sr_func;
	arg1 = srs_rx->sr_arg1;
	arg2 = mac_srs->srs_mrh;

	proc(arg1, arg2, mp_chain, NULL);
}

/*
 * This function is called to get the list of HW rings that are reserved by
 * an exclusive mac client.
 *
 * Return value: the number of HW rings.
 */
int
mac_hwrings_get(mac_client_handle_t mch, mac_group_handle_t *hwgh,
    mac_ring_handle_t *hwrh, mac_ring_type_t rtype)
{
	mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
	flow_entry_t *flent = mcip->mci_flent;
	mac_group_t *grp;
	mac_ring_t *ring;
	int cnt = 0;

	if (rtype == MAC_RING_TYPE_RX) {
		grp = flent->fe_rx_ring_group;
	} else if (rtype == MAC_RING_TYPE_TX) {
		grp = flent->fe_tx_ring_group;
	} else {
		ASSERT(B_FALSE);
		return (-1);
	}

	/*
	 * The MAC client did not reserve a group, return directly.
	 * This is probably because the underlying MAC does not support
	 * any groups.
	 */
	if (hwgh != NULL)
		*hwgh = NULL;
	if (grp == NULL)
		return (0);
	/*
	 * This group must be reserved by this MAC client.
	 */
	ASSERT((grp->mrg_state == MAC_GROUP_STATE_RESERVED) &&
	    (mcip == MAC_GROUP_ONLY_CLIENT(grp)));

	for (ring = grp->mrg_rings; ring != NULL; ring = ring->mr_next, cnt++) {
		ASSERT(cnt < MAX_RINGS_PER_GROUP);
		hwrh[cnt] = (mac_ring_handle_t)ring;
	}
	if (hwgh != NULL)
		*hwgh = (mac_group_handle_t)grp;

	return (cnt);
}
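/*
 * A caller such as aggr might retrieve and walk the reserved rings roughly
 * as follows (a hypothetical sketch; the per-ring action is illustrative):
 *
 *	mac_group_handle_t hwgh;
 *	mac_ring_handle_t hwrh[MAX_RINGS_PER_GROUP];
 *	int i, cnt;
 *
 *	cnt = mac_hwrings_get(mch, &hwgh, hwrh, MAC_RING_TYPE_RX);
 *	for (i = 0; i < cnt; i++) {
 *		... operate on hwrh[i], e.g. mac_hwring_start(hwrh[i])
 *	}
 */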
/*
 * Get the HW ring handles of the given group index. If the MAC
 * doesn't have a group at this index, or any groups at all, then 0 is
 * returned and hwgh is set to NULL. This is a private client API. The
 * MAC perimeter must be held when calling this function.
 *
 * mh: A handle to the MAC that owns the group.
 *
 * idx: The index of the HW group to be read.
 *
 * hwgh: If non-NULL, contains a handle to the HW group on return.
 *
 * hwrh: An array of ring handles pointing to the HW rings in the
 * group. The array must be large enough to hold a handle to each ring
 * in the group. To be safe, this array should be of size MAX_RINGS_PER_GROUP.
 *
 * rtype: Used to determine if we are fetching Rx or Tx rings.
 *
 * Returns the number of rings in the group.
 */
uint_t
mac_hwrings_idx_get(mac_handle_t mh, uint_t idx, mac_group_handle_t *hwgh,
    mac_ring_handle_t *hwrh, mac_ring_type_t rtype)
{
	mac_impl_t *mip = (mac_impl_t *)mh;
	mac_group_t *grp;
	mac_ring_t *ring;
	uint_t cnt = 0;

	/*
	 * The MAC perimeter must be held when accessing the
	 * mi_{rx,tx}_groups fields.
	 */
	ASSERT(MAC_PERIM_HELD(mh));
	ASSERT(rtype == MAC_RING_TYPE_RX || rtype == MAC_RING_TYPE_TX);

	if (rtype == MAC_RING_TYPE_RX) {
		grp = mip->mi_rx_groups;
	} else {
		ASSERT(rtype == MAC_RING_TYPE_TX);
		grp = mip->mi_tx_groups;
	}

	while (grp != NULL && grp->mrg_index != idx)
		grp = grp->mrg_next;

	/*
	 * If the MAC doesn't have a group at this index or doesn't
	 * implement the RINGS capab, then set hwgh to NULL and return 0.
	 */
	if (hwgh != NULL)
		*hwgh = NULL;

	if (grp == NULL)
		return (0);

	ASSERT3U(idx, ==, grp->mrg_index);

	for (ring = grp->mrg_rings; ring != NULL; ring = ring->mr_next, cnt++) {
		ASSERT3U(cnt, <, MAX_RINGS_PER_GROUP);
		hwrh[cnt] = (mac_ring_handle_t)ring;
	}

	/* A group should always have at least one ring. */
	ASSERT3U(cnt, >, 0);

	if (hwgh != NULL)
		*hwgh = (mac_group_handle_t)grp;

	return (cnt);
}

/*
 * This function is called to get info about Tx/Rx rings.
 *
 * Return value: returns uint_t which will have various bits set
 * that indicate different properties of the ring.
 */
uint_t
mac_hwring_getinfo(mac_ring_handle_t rh)
{
	mac_ring_t *ring = (mac_ring_t *)rh;
	mac_ring_info_t *info = &ring->mr_info;

	return (info->mri_flags);
}

/*
 * Set the passthru callback on the hardware ring.
 */
void
mac_hwring_set_passthru(mac_ring_handle_t hwrh, mac_rx_t fn, void *arg1,
    mac_resource_handle_t arg2)
{
	mac_ring_t *hwring = (mac_ring_t *)hwrh;

	ASSERT3S(hwring->mr_type, ==, MAC_RING_TYPE_RX);

	hwring->mr_classify_type = MAC_PASSTHRU_CLASSIFIER;

	hwring->mr_pt_fn = fn;
	hwring->mr_pt_arg1 = arg1;
	hwring->mr_pt_arg2 = arg2;
}
/*
 * Clear the passthru callback on the hardware ring.
 */
void
mac_hwring_clear_passthru(mac_ring_handle_t hwrh)
{
	mac_ring_t *hwring = (mac_ring_t *)hwrh;

	ASSERT3S(hwring->mr_type, ==, MAC_RING_TYPE_RX);

	hwring->mr_classify_type = MAC_NO_CLASSIFIER;

	hwring->mr_pt_fn = NULL;
	hwring->mr_pt_arg1 = NULL;
	hwring->mr_pt_arg2 = NULL;
}

void
mac_client_set_flow_cb(mac_client_handle_t mch, mac_rx_t func, void *arg1)
{
	mac_client_impl_t	*mcip = (mac_client_impl_t *)mch;
	flow_entry_t		*flent = mcip->mci_flent;

	mutex_enter(&flent->fe_lock);
	flent->fe_cb_fn = (flow_fn_t)func;
	flent->fe_cb_arg1 = arg1;
	flent->fe_cb_arg2 = NULL;
	flent->fe_flags &= ~FE_MC_NO_DATAPATH;
	mutex_exit(&flent->fe_lock);
}

void
mac_client_clear_flow_cb(mac_client_handle_t mch)
{
	mac_client_impl_t	*mcip = (mac_client_impl_t *)mch;
	flow_entry_t		*flent = mcip->mci_flent;

	mutex_enter(&flent->fe_lock);
	flent->fe_cb_fn = (flow_fn_t)mac_rx_def;
	flent->fe_cb_arg1 = NULL;
	flent->fe_cb_arg2 = NULL;
	flent->fe_flags |= FE_MC_NO_DATAPATH;
	mutex_exit(&flent->fe_lock);
}

/*
 * Export ddi interrupt handles from the HW ring to the pseudo ring and
 * set up the RX callback of the mac client which exclusively controls the
 * HW ring.
 */
void
mac_hwring_setup(mac_ring_handle_t hwrh, mac_resource_handle_t prh,
    mac_ring_handle_t pseudo_rh)
{
	mac_ring_t		*hw_ring = (mac_ring_t *)hwrh;
	mac_ring_t		*pseudo_ring;
	mac_soft_ring_set_t	*mac_srs = hw_ring->mr_srs;

	if (pseudo_rh != NULL) {
		pseudo_ring = (mac_ring_t *)pseudo_rh;
		/* Export the ddi handles to pseudo ring */
		pseudo_ring->mr_info.mri_intr.mi_ddi_handle =
		    hw_ring->mr_info.mri_intr.mi_ddi_handle;
		pseudo_ring->mr_info.mri_intr.mi_ddi_shared =
		    hw_ring->mr_info.mri_intr.mi_ddi_shared;
		/*
		 * Save a pointer to pseudo ring in the hw ring. If
		 * interrupt handle changes, the hw ring will be
		 * notified of the change (see mac_ring_intr_set())
		 * and the appropriate change has to be made to
		 * the pseudo ring that has exported the ddi handle.
		 */
		hw_ring->mr_prh = pseudo_rh;
	}

	if (hw_ring->mr_type == MAC_RING_TYPE_RX) {
		ASSERT(!(mac_srs->srs_type & SRST_TX));
		mac_srs->srs_mrh = prh;
		mac_srs->srs_rx.sr_lower_proc = mac_hwrings_rx_process;
	}
}

void
mac_hwring_teardown(mac_ring_handle_t hwrh)
{
	mac_ring_t *hw_ring = (mac_ring_t *)hwrh;
	mac_soft_ring_set_t *mac_srs;

	if (hw_ring == NULL)
		return;
	hw_ring->mr_prh = NULL;
	if (hw_ring->mr_type == MAC_RING_TYPE_RX) {
		mac_srs = hw_ring->mr_srs;
		ASSERT(!(mac_srs->srs_type & SRST_TX));
		mac_srs->srs_rx.sr_lower_proc = mac_rx_srs_process;
		mac_srs->srs_mrh = NULL;
	}
}

int
mac_hwring_disable_intr(mac_ring_handle_t rh)
{
	mac_ring_t *rr_ring = (mac_ring_t *)rh;
	mac_intr_t *intr = &rr_ring->mr_info.mri_intr;

	return (intr->mi_disable(intr->mi_handle));
}

int
mac_hwring_enable_intr(mac_ring_handle_t rh)
{
	mac_ring_t *rr_ring = (mac_ring_t *)rh;
	mac_intr_t *intr = &rr_ring->mr_info.mri_intr;

	return (intr->mi_enable(intr->mi_handle));
}
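/*
 * Together with mac_hwring_poll() below, the interrupt enable/disable entry
 * points support the usual switch between interrupt and polling mode on a
 * ring, roughly (a hypothetical sketch; the byte count and chain handling
 * are illustrative):
 *
 *	(void) mac_hwring_disable_intr(rh);	stop rx interrupts
 *	while (polling is desired) {
 *		mblk_t *chain = mac_hwring_poll(rh, bytes_to_pickup);
 *		... process chain ...
 *	}
 *	(void) mac_hwring_enable_intr(rh);	back to interrupt mode
 */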
1837 * 1838 * This is used by special MAC clients that are MAC themselves and 1839 * need to exert control over the underlying HW rings of the NIC. 1840 */ 1841 int 1842 mac_hwring_start(mac_ring_handle_t rh) 1843 { 1844 mac_ring_t *rr_ring = (mac_ring_t *)rh; 1845 int rv = 0; 1846 1847 if (rr_ring->mr_state != MR_INUSE) 1848 rv = mac_start_ring(rr_ring); 1849 1850 return (rv); 1851 } 1852 1853 /* 1854 * Stop the HW ring pointed to by rh. Also see mac_hwring_start(). 1855 */ 1856 void 1857 mac_hwring_stop(mac_ring_handle_t rh) 1858 { 1859 mac_ring_t *rr_ring = (mac_ring_t *)rh; 1860 1861 if (rr_ring->mr_state != MR_FREE) 1862 mac_stop_ring(rr_ring); 1863 } 1864 1865 /* 1866 * Remove the quiesced flag from the HW ring pointed to by rh. 1867 * 1868 * This is used by special MAC clients that are MAC themselves and 1869 * need to exert control over the underlying HW rings of the NIC. 1870 */ 1871 int 1872 mac_hwring_activate(mac_ring_handle_t rh) 1873 { 1874 mac_ring_t *rr_ring = (mac_ring_t *)rh; 1875 1876 MAC_RING_UNMARK(rr_ring, MR_QUIESCE); 1877 return (0); 1878 } 1879 1880 /* 1881 * Quiesce the HW ring pointed to by rh. Also see mac_hwring_activate(). 1882 */ 1883 void 1884 mac_hwring_quiesce(mac_ring_handle_t rh) 1885 { 1886 mac_ring_t *rr_ring = (mac_ring_t *)rh; 1887 1888 mac_rx_ring_quiesce(rr_ring, MR_QUIESCE); 1889 } 1890 1891 mblk_t * 1892 mac_hwring_poll(mac_ring_handle_t rh, int bytes_to_pickup) 1893 { 1894 mac_ring_t *rr_ring = (mac_ring_t *)rh; 1895 mac_ring_info_t *info = &rr_ring->mr_info; 1896 1897 return (info->mri_poll(info->mri_driver, bytes_to_pickup)); 1898 } 1899 1900 /* 1901 * Send packets through a selected tx ring. 1902 */ 1903 mblk_t * 1904 mac_hwring_tx(mac_ring_handle_t rh, mblk_t *mp) 1905 { 1906 mac_ring_t *ring = (mac_ring_t *)rh; 1907 mac_ring_info_t *info = &ring->mr_info; 1908 1909 ASSERT(ring->mr_type == MAC_RING_TYPE_TX && 1910 ring->mr_state >= MR_INUSE); 1911 return (info->mri_tx(info->mri_driver, mp)); 1912 } 1913 1914 /* 1915 * Query stats for a particular rx/tx ring. 1916 */ 1917 int 1918 mac_hwring_getstat(mac_ring_handle_t rh, uint_t stat, uint64_t *val) 1919 { 1920 mac_ring_t *ring = (mac_ring_t *)rh; 1921 mac_ring_info_t *info = &ring->mr_info; 1922 1923 return (info->mri_stat(info->mri_driver, stat, val)); 1924 } 1925 1926 /* 1927 * Private function that is only used by aggr to send packets through 1928 * a port/Tx ring. Since aggr exposes a pseudo Tx ring even for ports 1929 * that do not expose Tx rings, the aggr_ring_tx() entry point needs 1930 * access to the mac_impl_t to send packets through the m_tx() entry point. 1931 * It accomplishes this by calling the mac_hwring_send_priv() function. 1932 */ 1933 mblk_t * 1934 mac_hwring_send_priv(mac_client_handle_t mch, mac_ring_handle_t rh, mblk_t *mp) 1935 { 1936 mac_client_impl_t *mcip = (mac_client_impl_t *)mch; 1937 mac_impl_t *mip = mcip->mci_mip; 1938 1939 return (mac_provider_tx(mip, rh, mp, mcip)); 1940 } 1941 1942 /* 1943 * Private function that is only used by aggr to update the default transmission 1944 * ring. Because aggr exposes a pseudo Tx ring even for ports that may 1945 * temporarily be down, it may need to update the default ring that is used by 1946 * MAC such that it refers to a link that can actively be used to send traffic. 1947 * Note that this is different from the case where the port has been removed 1948 * from the group. In those cases, all of the rings will be torn down because 1949 * the ring will no longer exist.
It's important to give aggr a case where the 1950 * rings can still exist such that it may be able to continue to send LACP PDUs 1951 * to potentially restore the link. 1952 * 1953 * Finally, we explicitly don't do anything if the ring hasn't been enabled yet. 1954 * This is to help out aggr, which doesn't really know the internal state that 1955 * MAC does about the rings and can't know that it's not quite ready for use 1956 * yet. 1957 */ 1958 void 1959 mac_hwring_set_default(mac_handle_t mh, mac_ring_handle_t rh) 1960 { 1961 mac_impl_t *mip = (mac_impl_t *)mh; 1962 mac_ring_t *ring = (mac_ring_t *)rh; 1963 1964 ASSERT(MAC_PERIM_HELD(mh)); 1965 VERIFY(mip->mi_state_flags & MIS_IS_AGGR); 1966 1967 if (ring->mr_state != MR_INUSE) 1968 return; 1969 1970 mip->mi_default_tx_ring = rh; 1971 } 1972 1973 int 1974 mac_hwgroup_addmac(mac_group_handle_t gh, const uint8_t *addr) 1975 { 1976 mac_group_t *group = (mac_group_t *)gh; 1977 1978 return (mac_group_addmac(group, addr)); 1979 } 1980 1981 int 1982 mac_hwgroup_remmac(mac_group_handle_t gh, const uint8_t *addr) 1983 { 1984 mac_group_t *group = (mac_group_t *)gh; 1985 1986 return (mac_group_remmac(group, addr)); 1987 } 1988 1989 /* 1990 * Program the group's HW VLAN filter if it has such support. 1991 * Otherwise, the group will implicitly accept tagged traffic and 1992 * there is nothing to do. 1993 */ 1994 int 1995 mac_hwgroup_addvlan(mac_group_handle_t gh, uint16_t vid) 1996 { 1997 mac_group_t *group = (mac_group_t *)gh; 1998 1999 if (!MAC_GROUP_HW_VLAN(group)) 2000 return (0); 2001 2002 return (mac_group_addvlan(group, vid)); 2003 } 2004 2005 int 2006 mac_hwgroup_remvlan(mac_group_handle_t gh, uint16_t vid) 2007 { 2008 mac_group_t *group = (mac_group_t *)gh; 2009 2010 if (!MAC_GROUP_HW_VLAN(group)) 2011 return (0); 2012 2013 return (mac_group_remvlan(group, vid)); 2014 } 2015 2016 /* 2017 * Determine if a MAC has HW VLAN support. This is a private API 2018 * consumed by aggr. In the future it might be nice to have a bitfield 2019 * in mac_capab_rings_t to track which forms of HW filtering are 2020 * supported by the MAC. 2021 */ 2022 boolean_t 2023 mac_has_hw_vlan(mac_handle_t mh) 2024 { 2025 mac_impl_t *mip = (mac_impl_t *)mh; 2026 2027 return (MAC_GROUP_HW_VLAN(mip->mi_rx_groups)); 2028 } 2029 2030 /* 2031 * Get the number of Rx HW groups on this MAC. 2032 */ 2033 uint_t 2034 mac_get_num_rx_groups(mac_handle_t mh) 2035 { 2036 mac_impl_t *mip = (mac_impl_t *)mh; 2037 2038 ASSERT(MAC_PERIM_HELD(mh)); 2039 return (mip->mi_rx_group_count); 2040 } 2041 2042 int 2043 mac_set_promisc(mac_handle_t mh, boolean_t value) 2044 { 2045 mac_impl_t *mip = (mac_impl_t *)mh; 2046 2047 ASSERT(MAC_PERIM_HELD(mh)); 2048 return (i_mac_promisc_set(mip, value)); 2049 } 2050 2051 /* 2052 * Set the RX group to be shared/reserved. Note that the group must be 2053 * started/stopped outside of this function. 2054 */ 2055 void 2056 mac_set_group_state(mac_group_t *grp, mac_group_state_t state) 2057 { 2058 /* 2059 * If there is no change in the group state, just return. 2060 */ 2061 if (grp->mrg_state == state) 2062 return; 2063 2064 switch (state) { 2065 case MAC_GROUP_STATE_RESERVED: 2066 /* 2067 * Successfully reserved the group. 2068 * 2069 * Given that there is an exclusive client controlling this 2070 * group, we enable the group level polling when available, 2071 * so that SRSs get to turn on/off individual rings they're 2072 * assigned to.
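 *
 * As an illustrative sketch, an exclusive client taking over and later
 * releasing a group drives the following transitions:
 *
 *	mac_set_group_state(grp, MAC_GROUP_STATE_RESERVED);
 *	... group interrupts disabled below; the SRSs poll the rings ...
 *	mac_set_group_state(grp, MAC_GROUP_STATE_SHARED);
 *	... interrupts re-enabled; rings are software classified ...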
2073 */ 2074 ASSERT(MAC_PERIM_HELD(grp->mrg_mh)); 2075 2076 if (grp->mrg_type == MAC_RING_TYPE_RX && 2077 GROUP_INTR_DISABLE_FUNC(grp) != NULL) { 2078 GROUP_INTR_DISABLE_FUNC(grp)(GROUP_INTR_HANDLE(grp)); 2079 } 2080 break; 2081 2082 case MAC_GROUP_STATE_SHARED: 2083 /* 2084 * Set all rings of this group to software classified. 2085 * If the group has an overriding interrupt, then re-enable it. 2086 */ 2087 ASSERT(MAC_PERIM_HELD(grp->mrg_mh)); 2088 2089 if (grp->mrg_type == MAC_RING_TYPE_RX && 2090 GROUP_INTR_ENABLE_FUNC(grp) != NULL) { 2091 GROUP_INTR_ENABLE_FUNC(grp)(GROUP_INTR_HANDLE(grp)); 2092 } 2093 /* The ring is not available for reservations any more */ 2094 break; 2095 2096 case MAC_GROUP_STATE_REGISTERED: 2097 /* Also callable from mac_register, perim is not held */ 2098 break; 2099 2100 default: 2101 ASSERT(B_FALSE); 2102 break; 2103 } 2104 2105 grp->mrg_state = state; 2106 } 2107 2108 /* 2109 * Quiesce future hardware classified packets for the specified Rx ring 2110 */ 2111 static void 2112 mac_rx_ring_quiesce(mac_ring_t *rx_ring, uint_t ring_flag) 2113 { 2114 ASSERT(rx_ring->mr_classify_type == MAC_HW_CLASSIFIER); 2115 ASSERT(ring_flag == MR_CONDEMNED || ring_flag == MR_QUIESCE); 2116 2117 mutex_enter(&rx_ring->mr_lock); 2118 rx_ring->mr_flag |= ring_flag; 2119 while (rx_ring->mr_refcnt != 0) 2120 cv_wait(&rx_ring->mr_cv, &rx_ring->mr_lock); 2121 mutex_exit(&rx_ring->mr_lock); 2122 } 2123 2124 /* 2125 * Please see mac_tx for details about the per cpu locking scheme 2126 */ 2127 static void 2128 mac_tx_lock_all(mac_client_impl_t *mcip) 2129 { 2130 int i; 2131 2132 for (i = 0; i <= mac_tx_percpu_cnt; i++) 2133 mutex_enter(&mcip->mci_tx_pcpu[i].pcpu_tx_lock); 2134 } 2135 2136 static void 2137 mac_tx_unlock_all(mac_client_impl_t *mcip) 2138 { 2139 int i; 2140 2141 for (i = mac_tx_percpu_cnt; i >= 0; i--) 2142 mutex_exit(&mcip->mci_tx_pcpu[i].pcpu_tx_lock); 2143 } 2144 2145 static void 2146 mac_tx_unlock_allbutzero(mac_client_impl_t *mcip) 2147 { 2148 int i; 2149 2150 for (i = mac_tx_percpu_cnt; i > 0; i--) 2151 mutex_exit(&mcip->mci_tx_pcpu[i].pcpu_tx_lock); 2152 } 2153 2154 static int 2155 mac_tx_sum_refcnt(mac_client_impl_t *mcip) 2156 { 2157 int i; 2158 int refcnt = 0; 2159 2160 for (i = 0; i <= mac_tx_percpu_cnt; i++) 2161 refcnt += mcip->mci_tx_pcpu[i].pcpu_tx_refcnt; 2162 2163 return (refcnt); 2164 } 2165 2166 /* 2167 * Stop future Tx packets coming down from the client in preparation for 2168 * quiescing the Tx side. This is needed for dynamic reclaim and reassignment 2169 * of rings between clients 2170 */ 2171 void 2172 mac_tx_client_block(mac_client_impl_t *mcip) 2173 { 2174 mac_tx_lock_all(mcip); 2175 mcip->mci_tx_flag |= MCI_TX_QUIESCE; 2176 while (mac_tx_sum_refcnt(mcip) != 0) { 2177 mac_tx_unlock_allbutzero(mcip); 2178 cv_wait(&mcip->mci_tx_cv, &mcip->mci_tx_pcpu[0].pcpu_tx_lock); 2179 mutex_exit(&mcip->mci_tx_pcpu[0].pcpu_tx_lock); 2180 mac_tx_lock_all(mcip); 2181 } 2182 mac_tx_unlock_all(mcip); 2183 } 2184 2185 void 2186 mac_tx_client_unblock(mac_client_impl_t *mcip) 2187 { 2188 mac_tx_lock_all(mcip); 2189 mcip->mci_tx_flag &= ~MCI_TX_QUIESCE; 2190 mac_tx_unlock_all(mcip); 2191 /* 2192 * We may fail to disable flow control for the last MAC_NOTE_TX 2193 * notification because the MAC client is quiesced. Send the 2194 * notification again. 2195 */ 2196 i_mac_notify(mcip->mci_mip, MAC_NOTE_TX); 2197 } 2198 2199 /* 2200 * Wait for an SRS to quiesce. The SRS worker will signal us when the 2201 * quiesce is done. 
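 *
 * This wait is always preceded by a mac_srs_signal() that posts the
 * corresponding request, as in this illustrative sketch:
 *
 *	mac_srs_signal(srs, SRS_QUIESCE);
 *	mac_srs_quiesce_wait(srs, SRS_QUIESCE_DONE);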
2202 */ 2203 static void 2204 mac_srs_quiesce_wait(mac_soft_ring_set_t *srs, uint_t srs_flag) 2205 { 2206 mutex_enter(&srs->srs_lock); 2207 while (!(srs->srs_state & srs_flag)) 2208 cv_wait(&srs->srs_quiesce_done_cv, &srs->srs_lock); 2209 mutex_exit(&srs->srs_lock); 2210 } 2211 2212 /* 2213 * Quiescing an Rx SRS is achieved by the following sequence. The protocol 2214 * works bottom up by cutting off packet flow from the bottommost point in the 2215 * mac, then the SRS, and then the soft rings. There are two use cases of this 2216 * mechanism. One is a temporary quiesce of the SRS, such as while changing 2217 * the Rx callbacks. Another use case is Rx SRS teardown. In the former case 2218 * the QUIESCE prefix/suffix is used and in the latter the CONDEMNED is used 2219 * for the SRS and MR flags. In the former case the threads pause waiting for 2220 * a restart, while in the latter case the threads exit. The Tx SRS teardown 2221 * is also mostly similar to the above. 2222 * 2223 * 1. Stop future hardware classified packets at the lowest level in the mac. 2224 * Remove any hardware classification rule (CONDEMNED case) and mark the 2225 * rings as CONDEMNED or QUIESCE as appropriate. This prevents the mr_refcnt 2226 * from increasing. Upcalls from the driver that come through hardware 2227 * classification will be dropped in mac_rx from now on. Then we wait for 2228 * the mr_refcnt to drop to zero. When the mr_refcnt reaches zero we are 2229 * sure there aren't any upcall threads from the driver through hardware 2230 * classification. In the case of SRS teardown we also remove the 2231 * classification rule in the driver. 2232 * 2233 * 2. Stop future software classified packets by marking the flow entry with 2234 * FE_QUIESCE or FE_CONDEMNED as appropriate which prevents the refcnt from 2235 * increasing. We also remove the flow entry from the table in the latter 2236 * case. Then wait for the fe_refcnt to reach an appropriate quiescent value 2237 * that indicates there aren't any active threads using that flow entry. 2238 * 2239 * 3. Quiesce the SRS and softrings by signaling the SRS. The SRS poll thread, 2240 * SRS worker thread, and the soft ring threads are quiesced in sequence 2241 * with the SRS worker thread serving as a master controller. This 2242 * mechanism is explained in mac_srs_worker_quiesce(). 2243 * 2244 * The restart mechanism to reactivate the SRS and softrings is explained 2245 * in mac_srs_worker_restart(). Here we just signal the SRS worker to start the 2246 * restart sequence. 2247 */ 2248 void 2249 mac_rx_srs_quiesce(mac_soft_ring_set_t *srs, uint_t srs_quiesce_flag) 2250 { 2251 flow_entry_t *flent = srs->srs_flent; 2252 uint_t mr_flag, srs_done_flag; 2253 2254 ASSERT(MAC_PERIM_HELD((mac_handle_t)FLENT_TO_MIP(flent))); 2255 ASSERT(!(srs->srs_type & SRST_TX)); 2256 2257 if (srs_quiesce_flag == SRS_CONDEMNED) { 2258 mr_flag = MR_CONDEMNED; 2259 srs_done_flag = SRS_CONDEMNED_DONE; 2260 if (srs->srs_type & SRST_CLIENT_POLL_ENABLED) 2261 mac_srs_client_poll_disable(srs->srs_mcip, srs); 2262 } else { 2263 ASSERT(srs_quiesce_flag == SRS_QUIESCE); 2264 mr_flag = MR_QUIESCE; 2265 srs_done_flag = SRS_QUIESCE_DONE; 2266 if (srs->srs_type & SRST_CLIENT_POLL_ENABLED) 2267 mac_srs_client_poll_quiesce(srs->srs_mcip, srs); 2268 } 2269 2270 if (srs->srs_ring != NULL) { 2271 mac_rx_ring_quiesce(srs->srs_ring, mr_flag); 2272 } else { 2273 /* 2274 * SRS is driven by software classification. In case 2275 * of CONDEMNED, the top level teardown functions will 2276 * deal with flow removal.
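 *
 * The FLOW_MARK() below corresponds to step 2 of the protocol described
 * above. At the caller level, a temporary quiesce and restart pairing
 * looks like this illustrative sketch:
 *
 *	mac_rx_srs_quiesce(srs, SRS_QUIESCE);
 *	... change the Rx callbacks ...
 *	mac_rx_srs_restart(srs);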
2277 */ 2278 if (srs_quiesce_flag != SRS_CONDEMNED) { 2279 FLOW_MARK(flent, FE_QUIESCE); 2280 mac_flow_wait(flent, FLOW_DRIVER_UPCALL); 2281 } 2282 } 2283 2284 /* 2285 * Signal the SRS to quiesce itself, and then cv_wait for the 2286 * SRS quiesce to complete. The SRS worker thread will wake us 2287 * up when the quiesce is complete. 2288 */ 2289 mac_srs_signal(srs, srs_quiesce_flag); 2290 mac_srs_quiesce_wait(srs, srs_done_flag); 2291 } 2292 2293 /* 2294 * Remove an SRS. 2295 */ 2296 void 2297 mac_rx_srs_remove(mac_soft_ring_set_t *srs) 2298 { 2299 flow_entry_t *flent = srs->srs_flent; 2300 int i; 2301 2302 mac_rx_srs_quiesce(srs, SRS_CONDEMNED); 2303 /* 2304 * Locate and remove our entry in the fe_rx_srs[] array, and 2305 * adjust the fe_rx_srs array entries and array count by 2306 * moving the last entry into the vacated spot. 2307 */ 2308 mutex_enter(&flent->fe_lock); 2309 for (i = 0; i < flent->fe_rx_srs_cnt; i++) { 2310 if (flent->fe_rx_srs[i] == srs) 2311 break; 2312 } 2313 2314 ASSERT(i != 0 && i < flent->fe_rx_srs_cnt); 2315 if (i != flent->fe_rx_srs_cnt - 1) { 2316 flent->fe_rx_srs[i] = 2317 flent->fe_rx_srs[flent->fe_rx_srs_cnt - 1]; 2318 i = flent->fe_rx_srs_cnt - 1; 2319 } 2320 2321 flent->fe_rx_srs[i] = NULL; 2322 flent->fe_rx_srs_cnt--; 2323 mutex_exit(&flent->fe_lock); 2324 2325 mac_srs_free(srs); 2326 } 2327 2328 static void 2329 mac_srs_clear_flag(mac_soft_ring_set_t *srs, uint_t flag) 2330 { 2331 mutex_enter(&srs->srs_lock); 2332 srs->srs_state &= ~flag; 2333 mutex_exit(&srs->srs_lock); 2334 } 2335 2336 void 2337 mac_rx_srs_restart(mac_soft_ring_set_t *srs) 2338 { 2339 flow_entry_t *flent = srs->srs_flent; 2340 mac_ring_t *mr; 2341 2342 ASSERT(MAC_PERIM_HELD((mac_handle_t)FLENT_TO_MIP(flent))); 2343 ASSERT((srs->srs_type & SRST_TX) == 0); 2344 2345 /* 2346 * This handles a change in the number of SRSs between the quiesce 2347 * and restart operations of a flow. 2348 */ 2349 if (!SRS_QUIESCED(srs)) 2350 return; 2351 2352 /* 2353 * Signal the SRS to restart itself. Wait for the restart to complete. 2354 * Note that we only restart the SRS if it is not marked as 2355 * permanently quiesced. 2356 */ 2357 if (!SRS_QUIESCED_PERMANENT(srs)) { 2358 mac_srs_signal(srs, SRS_RESTART); 2359 mac_srs_quiesce_wait(srs, SRS_RESTART_DONE); 2360 mac_srs_clear_flag(srs, SRS_RESTART_DONE); 2361 2362 mac_srs_client_poll_restart(srs->srs_mcip, srs); 2363 } 2364 2365 /* Finally clear the flags to let the packets in */ 2366 mr = srs->srs_ring; 2367 if (mr != NULL) { 2368 MAC_RING_UNMARK(mr, MR_QUIESCE); 2369 /* In case the ring was stopped, safely restart it */ 2370 if (mr->mr_state != MR_INUSE) 2371 (void) mac_start_ring(mr); 2372 } else { 2373 FLOW_UNMARK(flent, FE_QUIESCE); 2374 } 2375 } 2376 2377 /* 2378 * Temporary quiesce of a flow and associated Rx SRS. 2379 * Please see block comment above mac_rx_classify_flow_rem.
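 *
 * Functions like this one are used as flow walker callbacks; a caller
 * typically quiesces the client's primary flow and then walks the
 * subflow table, as in this illustrative sketch:
 *
 *	(void) mac_rx_classify_flow_quiesce(mcip->mci_flent, NULL);
 *	(void) mac_flow_walk_nolock(mcip->mci_subflow_tab,
 *	    mac_rx_classify_flow_quiesce, NULL);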
2380 */ 2381 /* ARGSUSED */ 2382 int 2383 mac_rx_classify_flow_quiesce(flow_entry_t *flent, void *arg) 2384 { 2385 int i; 2386 2387 for (i = 0; i < flent->fe_rx_srs_cnt; i++) { 2388 mac_rx_srs_quiesce((mac_soft_ring_set_t *)flent->fe_rx_srs[i], 2389 SRS_QUIESCE); 2390 } 2391 return (0); 2392 } 2393 2394 /* 2395 * Restart a flow and associated Rx SRS that has been quiesced temporarily 2396 * Please see block comment above mac_rx_classify_flow_rem 2397 */ 2398 /* ARGSUSED */ 2399 int 2400 mac_rx_classify_flow_restart(flow_entry_t *flent, void *arg) 2401 { 2402 int i; 2403 2404 for (i = 0; i < flent->fe_rx_srs_cnt; i++) 2405 mac_rx_srs_restart((mac_soft_ring_set_t *)flent->fe_rx_srs[i]); 2406 2407 return (0); 2408 } 2409 2410 void 2411 mac_srs_perm_quiesce(mac_client_handle_t mch, boolean_t on) 2412 { 2413 mac_client_impl_t *mcip = (mac_client_impl_t *)mch; 2414 flow_entry_t *flent = mcip->mci_flent; 2415 mac_impl_t *mip = mcip->mci_mip; 2416 mac_soft_ring_set_t *mac_srs; 2417 int i; 2418 2419 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 2420 2421 if (flent == NULL) 2422 return; 2423 2424 for (i = 0; i < flent->fe_rx_srs_cnt; i++) { 2425 mac_srs = flent->fe_rx_srs[i]; 2426 mutex_enter(&mac_srs->srs_lock); 2427 if (on) 2428 mac_srs->srs_state |= SRS_QUIESCE_PERM; 2429 else 2430 mac_srs->srs_state &= ~SRS_QUIESCE_PERM; 2431 mutex_exit(&mac_srs->srs_lock); 2432 } 2433 } 2434 2435 void 2436 mac_rx_client_quiesce(mac_client_handle_t mch) 2437 { 2438 mac_client_impl_t *mcip = (mac_client_impl_t *)mch; 2439 mac_impl_t *mip = mcip->mci_mip; 2440 2441 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 2442 2443 if (MCIP_DATAPATH_SETUP(mcip)) { 2444 (void) mac_rx_classify_flow_quiesce(mcip->mci_flent, 2445 NULL); 2446 (void) mac_flow_walk_nolock(mcip->mci_subflow_tab, 2447 mac_rx_classify_flow_quiesce, NULL); 2448 } 2449 } 2450 2451 void 2452 mac_rx_client_restart(mac_client_handle_t mch) 2453 { 2454 mac_client_impl_t *mcip = (mac_client_impl_t *)mch; 2455 mac_impl_t *mip = mcip->mci_mip; 2456 2457 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 2458 2459 if (MCIP_DATAPATH_SETUP(mcip)) { 2460 (void) mac_rx_classify_flow_restart(mcip->mci_flent, NULL); 2461 (void) mac_flow_walk_nolock(mcip->mci_subflow_tab, 2462 mac_rx_classify_flow_restart, NULL); 2463 } 2464 } 2465 2466 /* 2467 * This function only quiesces the Tx SRS and softring worker threads. Callers 2468 * need to make sure that there aren't any mac client threads doing current or 2469 * future transmits in the mac before calling this function. 2470 */ 2471 void 2472 mac_tx_srs_quiesce(mac_soft_ring_set_t *srs, uint_t srs_quiesce_flag) 2473 { 2474 mac_client_impl_t *mcip = srs->srs_mcip; 2475 2476 ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip)); 2477 2478 ASSERT(srs->srs_type & SRST_TX); 2479 ASSERT(srs_quiesce_flag == SRS_CONDEMNED || 2480 srs_quiesce_flag == SRS_QUIESCE); 2481 2482 /* 2483 * Signal the SRS to quiesce itself, and then cv_wait for the 2484 * SRS quiesce to complete. The SRS worker thread will wake us 2485 * up when the quiesce is complete 2486 */ 2487 mac_srs_signal(srs, srs_quiesce_flag); 2488 mac_srs_quiesce_wait(srs, srs_quiesce_flag == SRS_QUIESCE ? 2489 SRS_QUIESCE_DONE : SRS_CONDEMNED_DONE); 2490 } 2491 2492 void 2493 mac_tx_srs_restart(mac_soft_ring_set_t *srs) 2494 { 2495 /* 2496 * Resizing the fanout could result in creation of new SRSs. 
They may not necessarily be in the quiesced state, in which 2498 * case they do not need to be restarted. 2499 */ 2500 if (!SRS_QUIESCED(srs)) 2501 return; 2502 2503 mac_srs_signal(srs, SRS_RESTART); 2504 mac_srs_quiesce_wait(srs, SRS_RESTART_DONE); 2505 mac_srs_clear_flag(srs, SRS_RESTART_DONE); 2506 } 2507 2508 /* 2509 * Temporary quiesce of a flow and its associated Tx SRS. 2510 * Please see the block comment above mac_rx_srs_quiesce. 2511 */ 2512 /* ARGSUSED */ 2513 int 2514 mac_tx_flow_quiesce(flow_entry_t *flent, void *arg) 2515 { 2516 /* 2517 * The fe_tx_srs is NULL for a subflow on an interface that is 2518 * not plumbed. 2519 */ 2520 if (flent->fe_tx_srs != NULL) 2521 mac_tx_srs_quiesce(flent->fe_tx_srs, SRS_QUIESCE); 2522 return (0); 2523 } 2524 2525 /* ARGSUSED */ 2526 int 2527 mac_tx_flow_restart(flow_entry_t *flent, void *arg) 2528 { 2529 /* 2530 * The fe_tx_srs is NULL for a subflow on an interface that is 2531 * not plumbed. 2532 */ 2533 if (flent->fe_tx_srs != NULL) 2534 mac_tx_srs_restart(flent->fe_tx_srs); 2535 return (0); 2536 } 2537 2538 static void 2539 i_mac_tx_client_quiesce(mac_client_handle_t mch, uint_t srs_quiesce_flag) 2540 { 2541 mac_client_impl_t *mcip = (mac_client_impl_t *)mch; 2542 2543 ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip)); 2544 2545 mac_tx_client_block(mcip); 2546 if (MCIP_TX_SRS(mcip) != NULL) { 2547 mac_tx_srs_quiesce(MCIP_TX_SRS(mcip), srs_quiesce_flag); 2548 (void) mac_flow_walk_nolock(mcip->mci_subflow_tab, 2549 mac_tx_flow_quiesce, NULL); 2550 } 2551 } 2552 2553 void 2554 mac_tx_client_quiesce(mac_client_handle_t mch) 2555 { 2556 i_mac_tx_client_quiesce(mch, SRS_QUIESCE); 2557 } 2558 2559 void 2560 mac_tx_client_condemn(mac_client_handle_t mch) 2561 { 2562 i_mac_tx_client_quiesce(mch, SRS_CONDEMNED); 2563 } 2564 2565 void 2566 mac_tx_client_restart(mac_client_handle_t mch) 2567 { 2568 mac_client_impl_t *mcip = (mac_client_impl_t *)mch; 2569 2570 ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip)); 2571 2572 mac_tx_client_unblock(mcip); 2573 if (MCIP_TX_SRS(mcip) != NULL) { 2574 mac_tx_srs_restart(MCIP_TX_SRS(mcip)); 2575 (void) mac_flow_walk_nolock(mcip->mci_subflow_tab, 2576 mac_tx_flow_restart, NULL); 2577 } 2578 } 2579 2580 void 2581 mac_tx_client_flush(mac_client_impl_t *mcip) 2582 { 2583 ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip)); 2584 2585 mac_tx_client_quiesce((mac_client_handle_t)mcip); 2586 mac_tx_client_restart((mac_client_handle_t)mcip); 2587 } 2588 2589 void 2590 mac_client_quiesce(mac_client_impl_t *mcip) 2591 { 2592 mac_rx_client_quiesce((mac_client_handle_t)mcip); 2593 mac_tx_client_quiesce((mac_client_handle_t)mcip); 2594 } 2595 2596 void 2597 mac_client_restart(mac_client_impl_t *mcip) 2598 { 2599 mac_rx_client_restart((mac_client_handle_t)mcip); 2600 mac_tx_client_restart((mac_client_handle_t)mcip); 2601 } 2602 2603 /* 2604 * Allocate a minor number. 2605 */ 2606 minor_t 2607 mac_minor_hold(boolean_t sleep) 2608 { 2609 id_t id; 2610 2611 /* 2612 * Grab a value from the arena. 2613 */ 2614 atomic_inc_32(&minor_count); 2615 2616 if (sleep) 2617 return ((uint_t)id_alloc(minor_ids)); 2618 2619 if ((id = id_alloc_nosleep(minor_ids)) == -1) { 2620 atomic_dec_32(&minor_count); 2621 return (0); 2622 } 2623 2624 return ((uint_t)id); 2625 } 2626 2627 /* 2628 * Release a previously allocated minor number. 2629 */ 2630 void 2631 mac_minor_rele(minor_t minor) 2632 { 2633 /* 2634 * Return the value to the arena.
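 *
 * A hold and release are always paired; an illustrative sketch (a zero
 * return from mac_minor_hold(B_FALSE) means no minor was available):
 *
 *	minor = mac_minor_hold(B_TRUE);
 *	... minor is in use ...
 *	mac_minor_rele(minor);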
2635 */ 2636 id_free(minor_ids, minor); 2637 atomic_dec_32(&minor_count); 2638 } 2639 2640 uint32_t 2641 mac_no_notification(mac_handle_t mh) 2642 { 2643 mac_impl_t *mip = (mac_impl_t *)mh; 2644 2645 return (((mip->mi_state_flags & MIS_LEGACY) != 0) ? 2646 mip->mi_capab_legacy.ml_unsup_note : 0); 2647 } 2648 2649 /* 2650 * Prevent any new opens of this mac in preparation for unregister. 2651 */ 2652 int 2653 i_mac_disable(mac_impl_t *mip) 2654 { 2655 mac_client_impl_t *mcip; 2656 2657 rw_enter(&i_mac_impl_lock, RW_WRITER); 2658 if (mip->mi_state_flags & MIS_DISABLED) { 2659 /* Already disabled, return success */ 2660 rw_exit(&i_mac_impl_lock); 2661 return (0); 2662 } 2663 /* 2664 * See if there are any other references to this mac_t (e.g., VLANs). 2665 * If so return failure. If all the other checks below pass, then 2666 * set mi_disabled atomically under the i_mac_impl_lock to prevent 2667 * any new VLANs from being created or new mac client opens of this 2668 * mac end point. 2669 */ 2670 if (mip->mi_ref > 0) { 2671 rw_exit(&i_mac_impl_lock); 2672 return (EBUSY); 2673 } 2674 2675 /* 2676 * MAC clients must delete all multicast groups they join before 2677 * closing. Broadcast groups are reference counted; the last client 2678 * to delete the group will wait until the group is physically 2679 * deleted. Since all clients have closed this mac end point, 2680 * mi_bcast_ngrps must be zero at this point. 2681 */ 2682 ASSERT(mip->mi_bcast_ngrps == 0); 2683 2684 /* 2685 * Don't let go of this if it has some flows. 2686 * All other code guarantees no flows are added to a disabled 2687 * mac, therefore it is sufficient to check for the flow table 2688 * only here. 2689 */ 2690 mcip = mac_primary_client_handle(mip); 2691 if ((mcip != NULL) && mac_link_has_flows((mac_client_handle_t)mcip)) { 2692 rw_exit(&i_mac_impl_lock); 2693 return (ENOTEMPTY); 2694 } 2695 2696 mip->mi_state_flags |= MIS_DISABLED; 2697 rw_exit(&i_mac_impl_lock); 2698 return (0); 2699 } 2700 2701 int 2702 mac_disable_nowait(mac_handle_t mh) 2703 { 2704 mac_impl_t *mip = (mac_impl_t *)mh; 2705 int err; 2706 2707 if ((err = i_mac_perim_enter_nowait(mip)) != 0) 2708 return (err); 2709 err = i_mac_disable(mip); 2710 i_mac_perim_exit(mip); 2711 return (err); 2712 } 2713 2714 int 2715 mac_disable(mac_handle_t mh) 2716 { 2717 mac_impl_t *mip = (mac_impl_t *)mh; 2718 int err; 2719 2720 i_mac_perim_enter(mip); 2721 err = i_mac_disable(mip); 2722 i_mac_perim_exit(mip); 2723 2724 /* 2725 * Clean up notification thread and wait for it to exit. 2726 */ 2727 if (err == 0) 2728 i_mac_notify_exit(mip); 2729 2730 return (err); 2731 } 2732 2733 /* 2734 * Called when the MAC instance has a non-empty flow table, to de-multiplex 2735 * incoming packets to the right flow. 2736 */ 2737 /* ARGSUSED */ 2738 static mblk_t * 2739 mac_rx_classify(mac_impl_t *mip, mac_resource_handle_t mrh, mblk_t *mp) 2740 { 2741 flow_entry_t *flent = NULL; 2742 uint_t flags = FLOW_INBOUND; 2743 int err; 2744 2745 err = mac_flow_lookup(mip->mi_flow_tab, mp, flags, &flent); 2746 if (err != 0) { 2747 /* no registered receive function */ 2748 return (mp); 2749 } else { 2750 mac_client_impl_t *mcip; 2751 2752 /* 2753 * This flent might just be an additional one on the MAC client, 2754 * i.e. for classification purposes (different fdesc); however, 2755 * the resources, SRS et al., are in the mci_flent, so if 2756 * this isn't the mci_flent, we need to get it.
2757 */ 2758 if ((mcip = flent->fe_mcip) != NULL && 2759 mcip->mci_flent != flent) { 2760 FLOW_REFRELE(flent); 2761 flent = mcip->mci_flent; 2762 FLOW_TRY_REFHOLD(flent, err); 2763 if (err != 0) 2764 return (mp); 2765 } 2766 (flent->fe_cb_fn)(flent->fe_cb_arg1, flent->fe_cb_arg2, mp, 2767 B_FALSE); 2768 FLOW_REFRELE(flent); 2769 } 2770 return (NULL); 2771 } 2772 2773 mblk_t * 2774 mac_rx_flow(mac_handle_t mh, mac_resource_handle_t mrh, mblk_t *mp_chain) 2775 { 2776 mac_impl_t *mip = (mac_impl_t *)mh; 2777 mblk_t *bp, *bp1, **bpp, *list = NULL; 2778 2779 /* 2780 * We walk the chain and attempt to classify each packet. 2781 * The packets that couldn't be classified will be returned 2782 * to the caller. 2783 */ 2784 bp = mp_chain; 2785 bpp = &list; 2786 while (bp != NULL) { 2787 bp1 = bp; 2788 bp = bp->b_next; 2789 bp1->b_next = NULL; 2790 2791 if (mac_rx_classify(mip, mrh, bp1) != NULL) { 2792 *bpp = bp1; 2793 bpp = &bp1->b_next; 2794 } 2795 } 2796 return (list); 2797 } 2798 2799 static int 2800 mac_tx_flow_srs_wakeup(flow_entry_t *flent, void *arg) 2801 { 2802 mac_ring_handle_t ring = arg; 2803 2804 if (flent->fe_tx_srs) 2805 mac_tx_srs_wakeup(flent->fe_tx_srs, ring); 2806 return (0); 2807 } 2808 2809 void 2810 i_mac_tx_srs_notify(mac_impl_t *mip, mac_ring_handle_t ring) 2811 { 2812 mac_client_impl_t *cclient; 2813 mac_soft_ring_set_t *mac_srs; 2814 2815 /* 2816 * After grabbing the mi_rw_lock, the list of clients can't change. 2817 * If there are any clients, mi_disabled must be B_FALSE and can't 2818 * get set since there are clients. If there aren't any clients we 2819 * don't do anything. In any case the mip has to be valid. The driver 2820 * must make sure that it goes single threaded (with respect to mac 2821 * calls) and wait for all pending mac calls to finish before calling 2822 * mac_unregister. 2823 */ 2824 rw_enter(&i_mac_impl_lock, RW_READER); 2825 if (mip->mi_state_flags & MIS_DISABLED) { 2826 rw_exit(&i_mac_impl_lock); 2827 return; 2828 } 2829 2830 /* 2831 * Get the MAC Tx SRS by walking the mac_client_handle list. 2832 */ 2833 rw_enter(&mip->mi_rw_lock, RW_READER); 2834 for (cclient = mip->mi_clients_list; cclient != NULL; 2835 cclient = cclient->mci_client_next) { 2836 if ((mac_srs = MCIP_TX_SRS(cclient)) != NULL) { 2837 mac_tx_srs_wakeup(mac_srs, ring); 2838 } else { 2839 /* 2840 * Aggr opens underlying ports in exclusive mode 2841 * and registers flow control callbacks using 2842 * mac_tx_client_notify(). When opened in 2843 * exclusive mode, Tx SRS won't be created 2844 * during mac_unicast_add(). 2845 */ 2846 if (cclient->mci_state_flags & MCIS_EXCLUSIVE) { 2847 mac_tx_invoke_callbacks(cclient, 2848 (mac_tx_cookie_t)ring); 2849 } 2850 } 2851 (void) mac_flow_walk(cclient->mci_subflow_tab, 2852 mac_tx_flow_srs_wakeup, ring); 2853 } 2854 rw_exit(&mip->mi_rw_lock); 2855 rw_exit(&i_mac_impl_lock); 2856 } 2857 2858 /* ARGSUSED */ 2859 void 2860 mac_multicast_refresh(mac_handle_t mh, mac_multicst_t refresh, void *arg, 2861 boolean_t add) 2862 { 2863 mac_impl_t *mip = (mac_impl_t *)mh; 2864 2865 i_mac_perim_enter((mac_impl_t *)mh); 2866 /* 2867 * If no specific refresh function was given then default to the 2868 * driver's m_multicst entry point.
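 *
 * For example, a caller that simply wants the current multicast state
 * re-programmed through the driver can pass a NULL refresh function, as
 * in this illustrative sketch:
 *
 *	mac_multicast_refresh(mh, NULL, NULL, B_TRUE);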
2869 */ 2870 if (refresh == NULL) { 2871 refresh = mip->mi_multicst; 2872 arg = mip->mi_driver; 2873 } 2874 2875 mac_bcast_refresh(mip, refresh, arg, add); 2876 i_mac_perim_exit((mac_impl_t *)mh); 2877 } 2878 2879 void 2880 mac_promisc_refresh(mac_handle_t mh, mac_setpromisc_t refresh, void *arg) 2881 { 2882 mac_impl_t *mip = (mac_impl_t *)mh; 2883 2884 /* 2885 * If no specific refresh function was given then default to the 2886 * driver's m_promisc entry point. 2887 */ 2888 if (refresh == NULL) { 2889 refresh = mip->mi_setpromisc; 2890 arg = mip->mi_driver; 2891 } 2892 ASSERT(refresh != NULL); 2893 2894 /* 2895 * Call the refresh function with the current promiscuity. 2896 */ 2897 refresh(arg, (mip->mi_devpromisc != 0)); 2898 } 2899 2900 /* 2901 * The mac client requests that the mac not change its margin size to 2902 * less than the specified value. If "current" is B_TRUE, then the client 2903 * requests that the mac not change its margin size to smaller than the 2904 * current size. Further, the current margin size value is returned in this case. 2905 * 2906 * We keep every requested size in an ordered list from largest to smallest. 2907 */ 2908 int 2909 mac_margin_add(mac_handle_t mh, uint32_t *marginp, boolean_t current) 2910 { 2911 mac_impl_t *mip = (mac_impl_t *)mh; 2912 mac_margin_req_t **pp, *p; 2913 int err = 0; 2914 2915 rw_enter(&(mip->mi_rw_lock), RW_WRITER); 2916 if (current) 2917 *marginp = mip->mi_margin; 2918 2919 /* 2920 * If the current margin value cannot satisfy the margin requested, 2921 * return ENOTSUP directly. 2922 */ 2923 if (*marginp > mip->mi_margin) { 2924 err = ENOTSUP; 2925 goto done; 2926 } 2927 2928 /* 2929 * Check whether the given margin is already in the list. If so, 2930 * bump the reference count. 2931 */ 2932 for (pp = &mip->mi_mmrp; (p = *pp) != NULL; pp = &p->mmr_nextp) { 2933 if (p->mmr_margin == *marginp) { 2934 /* 2935 * The margin requested is already in the list, 2936 * so just bump the reference count. 2937 */ 2938 p->mmr_ref++; 2939 goto done; 2940 } 2941 if (p->mmr_margin < *marginp) 2942 break; 2943 } 2944 2945 2946 p = kmem_zalloc(sizeof (mac_margin_req_t), KM_SLEEP); 2947 p->mmr_margin = *marginp; 2948 p->mmr_ref++; 2949 p->mmr_nextp = *pp; 2950 *pp = p; 2951 2952 done: 2953 rw_exit(&(mip->mi_rw_lock)); 2954 return (err); 2955 } 2956 2957 /* 2958 * The mac client requests to cancel its previous mac_margin_add() request. 2959 * We remove the requested margin size from the list. 2960 */ 2961 int 2962 mac_margin_remove(mac_handle_t mh, uint32_t margin) 2963 { 2964 mac_impl_t *mip = (mac_impl_t *)mh; 2965 mac_margin_req_t **pp, *p; 2966 int err = 0; 2967 2968 rw_enter(&(mip->mi_rw_lock), RW_WRITER); 2969 /* 2970 * Find the entry in the list for the given margin. 2971 */ 2972 for (pp = &(mip->mi_mmrp); (p = *pp) != NULL; pp = &(p->mmr_nextp)) { 2973 if (p->mmr_margin == margin) { 2974 if (--p->mmr_ref == 0) 2975 break; 2976 2977 /* 2978 * There is still a reference to this margin so 2979 * there's nothing more to do. 2980 */ 2981 goto done; 2982 } 2983 } 2984 2985 /* 2986 * We did not find an entry for the given margin. 2987 */ 2988 if (p == NULL) { 2989 err = ENOENT; 2990 goto done; 2991 } 2992 2993 ASSERT(p->mmr_ref == 0); 2994 2995 /* 2996 * Remove it from the list.
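 *
 * A client pairs the add and remove calls around the lifetime of its
 * margin requirement; an illustrative sketch (4 bytes, e.g. room for a
 * VLAN tag):
 *
 *	uint32_t margin = 4;
 *
 *	if (mac_margin_add(mh, &margin, B_FALSE) == 0) {
 *		... the margin won't shrink below 4 bytes ...
 *		(void) mac_margin_remove(mh, margin);
 *	}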
2997 */ 2998 *pp = p->mmr_nextp; 2999 kmem_free(p, sizeof (mac_margin_req_t)); 3000 done: 3001 rw_exit(&(mip->mi_rw_lock)); 3002 return (err); 3003 } 3004 3005 boolean_t 3006 mac_margin_update(mac_handle_t mh, uint32_t margin) 3007 { 3008 mac_impl_t *mip = (mac_impl_t *)mh; 3009 uint32_t margin_needed = 0; 3010 3011 rw_enter(&(mip->mi_rw_lock), RW_WRITER); 3012 3013 if (mip->mi_mmrp != NULL) 3014 margin_needed = mip->mi_mmrp->mmr_margin; 3015 3016 if (margin_needed <= margin) 3017 mip->mi_margin = margin; 3018 3019 rw_exit(&(mip->mi_rw_lock)); 3020 3021 if (margin_needed <= margin) 3022 i_mac_notify(mip, MAC_NOTE_MARGIN); 3023 3024 return (margin_needed <= margin); 3025 } 3026 3027 /* 3028 * MAC clients use this interface to request that a MAC device not change its 3029 * MTU below the specified amount. At this time, that amount must be within the 3030 * range of the device's current minimum and the device's current maximum; e.g., a 3031 * client cannot request a 3000 byte MTU when the device's MTU is currently 3032 * 2000. 3033 * 3034 * If "current" is set to B_TRUE, then the request is simply to reserve the 3035 * current underlying mac's maximum for this mac client and return it in mtup. 3036 */ 3037 int 3038 mac_mtu_add(mac_handle_t mh, uint32_t *mtup, boolean_t current) 3039 { 3040 mac_impl_t *mip = (mac_impl_t *)mh; 3041 mac_mtu_req_t *prev, *cur; 3042 mac_propval_range_t mpr; 3043 int err; 3044 3045 i_mac_perim_enter(mip); 3046 rw_enter(&mip->mi_rw_lock, RW_WRITER); 3047 3048 if (current == B_TRUE) 3049 *mtup = mip->mi_sdu_max; 3050 mpr.mpr_count = 1; 3051 err = mac_prop_info(mh, MAC_PROP_MTU, "mtu", NULL, 0, &mpr, NULL); 3052 if (err != 0) { 3053 rw_exit(&mip->mi_rw_lock); 3054 i_mac_perim_exit(mip); 3055 return (err); 3056 } 3057 3058 if (*mtup > mip->mi_sdu_max || 3059 *mtup < mpr.mpr_range_uint32[0].mpur_min) { 3060 rw_exit(&mip->mi_rw_lock); 3061 i_mac_perim_exit(mip); 3062 return (ENOTSUP); 3063 } 3064 3065 prev = NULL; 3066 for (cur = mip->mi_mtrp; cur != NULL; cur = cur->mtr_nextp) { 3067 if (*mtup == cur->mtr_mtu) { 3068 cur->mtr_ref++; 3069 rw_exit(&mip->mi_rw_lock); 3070 i_mac_perim_exit(mip); 3071 return (0); 3072 } 3073 3074 if (*mtup > cur->mtr_mtu) 3075 break; 3076 3077 prev = cur; 3078 } 3079 3080 cur = kmem_alloc(sizeof (mac_mtu_req_t), KM_SLEEP); 3081 cur->mtr_mtu = *mtup; 3082 cur->mtr_ref = 1; 3083 if (prev != NULL) { 3084 cur->mtr_nextp = prev->mtr_nextp; 3085 prev->mtr_nextp = cur; 3086 } else { 3087 cur->mtr_nextp = mip->mi_mtrp; 3088 mip->mi_mtrp = cur; 3089 } 3090 3091 rw_exit(&mip->mi_rw_lock); 3092 i_mac_perim_exit(mip); 3093 return (0); 3094 } 3095 3096 int 3097 mac_mtu_remove(mac_handle_t mh, uint32_t mtu) 3098 { 3099 mac_impl_t *mip = (mac_impl_t *)mh; 3100 mac_mtu_req_t *cur, *prev; 3101 3102 i_mac_perim_enter(mip); 3103 rw_enter(&mip->mi_rw_lock, RW_WRITER); 3104 3105 prev = NULL; 3106 for (cur = mip->mi_mtrp; cur != NULL; cur = cur->mtr_nextp) { 3107 if (cur->mtr_mtu == mtu) { 3108 ASSERT(cur->mtr_ref > 0); 3109 cur->mtr_ref--; 3110 if (cur->mtr_ref == 0) { 3111 if (prev == NULL) { 3112 mip->mi_mtrp = cur->mtr_nextp; 3113 } else { 3114 prev->mtr_nextp = cur->mtr_nextp; 3115 } 3116 kmem_free(cur, sizeof (mac_mtu_req_t)); 3117 } 3118 rw_exit(&mip->mi_rw_lock); 3119 i_mac_perim_exit(mip); 3120 return (0); 3121 } 3122 3123 prev = cur; 3124 } 3125 3126 rw_exit(&mip->mi_rw_lock); 3127 i_mac_perim_exit(mip); 3128 return (ENOENT); 3129 } 3130 3131 /* 3132 * MAC Type Plugin functions.
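 *
 * A plugin typically registers itself from its _init() routine using the
 * pattern below (illustrative sketch; "myplugin" and my_mactype_ops are
 * hypothetical):
 *
 *	mactype_register_t *mtrp = mactype_alloc(MACTYPE_VERSION);
 *
 *	mtrp->mtr_ident = "myplugin";
 *	mtrp->mtr_ops = &my_mactype_ops;
 *	...
 *	err = mactype_register(mtrp);
 *	mactype_free(mtrp);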
3133 */ 3134 3135 mactype_t * 3136 mactype_getplugin(const char *pname) 3137 { 3138 mactype_t *mtype = NULL; 3139 boolean_t tried_modload = B_FALSE; 3140 3141 mutex_enter(&i_mactype_lock); 3142 3143 find_registered_mactype: 3144 if (mod_hash_find(i_mactype_hash, (mod_hash_key_t)pname, 3145 (mod_hash_val_t *)&mtype) != 0) { 3146 if (!tried_modload) { 3147 /* 3148 * If the plugin has not yet been loaded, then 3149 * attempt to load it now. If modload() succeeds, 3150 * the plugin should have registered using 3151 * mactype_register(), in which case we can go back 3152 * and attempt to find it again. 3153 */ 3154 if (modload(MACTYPE_KMODDIR, (char *)pname) != -1) { 3155 tried_modload = B_TRUE; 3156 goto find_registered_mactype; 3157 } 3158 } 3159 } else { 3160 /* 3161 * Note that there's no danger that the plugin we've loaded 3162 * could be unloaded between the modload() step and the 3163 * reference count bump here, as we're holding 3164 * i_mactype_lock, which mactype_unregister() also holds. 3165 */ 3166 atomic_inc_32(&mtype->mt_ref); 3167 } 3168 3169 mutex_exit(&i_mactype_lock); 3170 return (mtype); 3171 } 3172 3173 mactype_register_t * 3174 mactype_alloc(uint_t mactype_version) 3175 { 3176 mactype_register_t *mtrp; 3177 3178 /* 3179 * Make sure there isn't a version mismatch between the plugin and 3180 * the framework. In the future, if multiple versions are 3181 * supported, this check could become more sophisticated. 3182 */ 3183 if (mactype_version != MACTYPE_VERSION) 3184 return (NULL); 3185 3186 mtrp = kmem_zalloc(sizeof (mactype_register_t), KM_SLEEP); 3187 mtrp->mtr_version = mactype_version; 3188 return (mtrp); 3189 } 3190 3191 void 3192 mactype_free(mactype_register_t *mtrp) 3193 { 3194 kmem_free(mtrp, sizeof (mactype_register_t)); 3195 } 3196 3197 int 3198 mactype_register(mactype_register_t *mtrp) 3199 { 3200 mactype_t *mtp; 3201 mactype_ops_t *ops = mtrp->mtr_ops; 3202 3203 /* Do some sanity checking before we register this MAC type. */ 3204 if (mtrp->mtr_ident == NULL || ops == NULL) 3205 return (EINVAL); 3206 3207 /* 3208 * Verify that all mandatory callbacks are set in the ops 3209 * vector. 3210 */ 3211 if (ops->mtops_unicst_verify == NULL || 3212 ops->mtops_multicst_verify == NULL || 3213 ops->mtops_sap_verify == NULL || 3214 ops->mtops_header == NULL || 3215 ops->mtops_header_info == NULL) { 3216 return (EINVAL); 3217 } 3218 3219 mtp = kmem_zalloc(sizeof (*mtp), KM_SLEEP); 3220 mtp->mt_ident = mtrp->mtr_ident; 3221 mtp->mt_ops = *ops; 3222 mtp->mt_type = mtrp->mtr_mactype; 3223 mtp->mt_nativetype = mtrp->mtr_nativetype; 3224 mtp->mt_addr_length = mtrp->mtr_addrlen; 3225 if (mtrp->mtr_brdcst_addr != NULL) { 3226 mtp->mt_brdcst_addr = kmem_alloc(mtrp->mtr_addrlen, KM_SLEEP); 3227 bcopy(mtrp->mtr_brdcst_addr, mtp->mt_brdcst_addr, 3228 mtrp->mtr_addrlen); 3229 } 3230 3231 mtp->mt_stats = mtrp->mtr_stats; 3232 mtp->mt_statcount = mtrp->mtr_statcount; 3233 3234 mtp->mt_mapping = mtrp->mtr_mapping; 3235 mtp->mt_mappingcount = mtrp->mtr_mappingcount; 3236 3237 if (mod_hash_insert(i_mactype_hash, 3238 (mod_hash_key_t)mtp->mt_ident, (mod_hash_val_t)mtp) != 0) { 3239 /* mt_brdcst_addr may be NULL, as in mactype_unregister() */ if (mtp->mt_brdcst_addr != NULL) kmem_free(mtp->mt_brdcst_addr, mtp->mt_addr_length); 3240 kmem_free(mtp, sizeof (*mtp)); 3241 return (EEXIST); 3242 } 3243 return (0); 3244 } 3245 3246 int 3247 mactype_unregister(const char *ident) 3248 { 3249 mactype_t *mtp; 3250 mod_hash_val_t val; 3251 int err; 3252 3253 /* 3254 * Let's not allow MAC drivers to use this plugin while we're 3255 * trying to unregister it.
Holding i_mactype_lock also prevents a 3256 * plugin from unregistering while a MAC driver is attempting to 3257 * hold a reference to it in i_mactype_getplugin(). 3258 */ 3259 mutex_enter(&i_mactype_lock); 3260 3261 if ((err = mod_hash_find(i_mactype_hash, (mod_hash_key_t)ident, 3262 (mod_hash_val_t *)&mtp)) != 0) { 3263 /* A plugin is trying to unregister, but it never registered. */ 3264 err = ENXIO; 3265 goto done; 3266 } 3267 3268 if (mtp->mt_ref != 0) { 3269 err = EBUSY; 3270 goto done; 3271 } 3272 3273 err = mod_hash_remove(i_mactype_hash, (mod_hash_key_t)ident, &val); 3274 ASSERT(err == 0); 3275 if (err != 0) { 3276 /* This should never happen, thus the ASSERT() above. */ 3277 err = EINVAL; 3278 goto done; 3279 } 3280 ASSERT(mtp == (mactype_t *)val); 3281 3282 if (mtp->mt_brdcst_addr != NULL) 3283 kmem_free(mtp->mt_brdcst_addr, mtp->mt_addr_length); 3284 kmem_free(mtp, sizeof (mactype_t)); 3285 done: 3286 mutex_exit(&i_mactype_lock); 3287 return (err); 3288 } 3289 3290 /* 3291 * Checks the size of the value specified for a property as part 3292 * of a property operation. Returns B_TRUE if the size is 3293 * correct, B_FALSE otherwise. 3294 */ 3295 boolean_t 3296 mac_prop_check_size(mac_prop_id_t id, uint_t valsize, boolean_t is_range) 3297 { 3298 uint_t minsize = 0; 3299 3300 if (is_range) 3301 return (valsize >= sizeof (mac_propval_range_t)); 3302 3303 switch (id) { 3304 case MAC_PROP_ZONE: 3305 minsize = sizeof (dld_ioc_zid_t); 3306 break; 3307 case MAC_PROP_AUTOPUSH: 3308 if (valsize != 0) 3309 minsize = sizeof (struct dlautopush); 3310 break; 3311 case MAC_PROP_TAGMODE: 3312 minsize = sizeof (link_tagmode_t); 3313 break; 3314 case MAC_PROP_RESOURCE: 3315 case MAC_PROP_RESOURCE_EFF: 3316 minsize = sizeof (mac_resource_props_t); 3317 break; 3318 case MAC_PROP_DUPLEX: 3319 minsize = sizeof (link_duplex_t); 3320 break; 3321 case MAC_PROP_SPEED: 3322 minsize = sizeof (uint64_t); 3323 break; 3324 case MAC_PROP_STATUS: 3325 minsize = sizeof (link_state_t); 3326 break; 3327 case MAC_PROP_AUTONEG: 3328 case MAC_PROP_EN_AUTONEG: 3329 minsize = sizeof (uint8_t); 3330 break; 3331 case MAC_PROP_MTU: 3332 case MAC_PROP_LLIMIT: 3333 case MAC_PROP_LDECAY: 3334 minsize = sizeof (uint32_t); 3335 break; 3336 case MAC_PROP_FLOWCTRL: 3337 minsize = sizeof (link_flowctrl_t); 3338 break; 3339 case MAC_PROP_ADV_5000FDX_CAP: 3340 case MAC_PROP_EN_5000FDX_CAP: 3341 case MAC_PROP_ADV_2500FDX_CAP: 3342 case MAC_PROP_EN_2500FDX_CAP: 3343 case MAC_PROP_ADV_100GFDX_CAP: 3344 case MAC_PROP_EN_100GFDX_CAP: 3345 case MAC_PROP_ADV_50GFDX_CAP: 3346 case MAC_PROP_EN_50GFDX_CAP: 3347 case MAC_PROP_ADV_40GFDX_CAP: 3348 case MAC_PROP_EN_40GFDX_CAP: 3349 case MAC_PROP_ADV_25GFDX_CAP: 3350 case MAC_PROP_EN_25GFDX_CAP: 3351 case MAC_PROP_ADV_10GFDX_CAP: 3352 case MAC_PROP_EN_10GFDX_CAP: 3353 case MAC_PROP_ADV_1000HDX_CAP: 3354 case MAC_PROP_EN_1000HDX_CAP: 3355 case MAC_PROP_ADV_100FDX_CAP: 3356 case MAC_PROP_EN_100FDX_CAP: 3357 case MAC_PROP_ADV_100HDX_CAP: 3358 case MAC_PROP_EN_100HDX_CAP: 3359 case MAC_PROP_ADV_10FDX_CAP: 3360 case MAC_PROP_EN_10FDX_CAP: 3361 case MAC_PROP_ADV_10HDX_CAP: 3362 case MAC_PROP_EN_10HDX_CAP: 3363 case MAC_PROP_ADV_100T4_CAP: 3364 case MAC_PROP_EN_100T4_CAP: 3365 minsize = sizeof (uint8_t); 3366 break; 3367 case MAC_PROP_PVID: 3368 minsize = sizeof (uint16_t); 3369 break; 3370 case MAC_PROP_IPTUN_HOPLIMIT: 3371 minsize = sizeof (uint32_t); 3372 break; 3373 case MAC_PROP_IPTUN_ENCAPLIMIT: 3374 minsize = sizeof (uint32_t); 3375 break; 3376 case MAC_PROP_MAX_TX_RINGS_AVAIL: 3377 case
MAC_PROP_MAX_RX_RINGS_AVAIL: 3378 case MAC_PROP_MAX_RXHWCLNT_AVAIL: 3379 case MAC_PROP_MAX_TXHWCLNT_AVAIL: 3380 minsize = sizeof (uint_t); 3381 break; 3382 case MAC_PROP_WL_ESSID: 3383 minsize = sizeof (wl_linkstatus_t); 3384 break; 3385 case MAC_PROP_WL_BSSID: 3386 minsize = sizeof (wl_bssid_t); 3387 break; 3388 case MAC_PROP_WL_BSSTYPE: 3389 minsize = sizeof (wl_bss_type_t); 3390 break; 3391 case MAC_PROP_WL_LINKSTATUS: 3392 minsize = sizeof (wl_linkstatus_t); 3393 break; 3394 case MAC_PROP_WL_DESIRED_RATES: 3395 minsize = sizeof (wl_rates_t); 3396 break; 3397 case MAC_PROP_WL_SUPPORTED_RATES: 3398 minsize = sizeof (wl_rates_t); 3399 break; 3400 case MAC_PROP_WL_AUTH_MODE: 3401 minsize = sizeof (wl_authmode_t); 3402 break; 3403 case MAC_PROP_WL_ENCRYPTION: 3404 minsize = sizeof (wl_encryption_t); 3405 break; 3406 case MAC_PROP_WL_RSSI: 3407 minsize = sizeof (wl_rssi_t); 3408 break; 3409 case MAC_PROP_WL_PHY_CONFIG: 3410 minsize = sizeof (wl_phy_conf_t); 3411 break; 3412 case MAC_PROP_WL_CAPABILITY: 3413 minsize = sizeof (wl_capability_t); 3414 break; 3415 case MAC_PROP_WL_WPA: 3416 minsize = sizeof (wl_wpa_t); 3417 break; 3418 case MAC_PROP_WL_SCANRESULTS: 3419 minsize = sizeof (wl_wpa_ess_t); 3420 break; 3421 case MAC_PROP_WL_POWER_MODE: 3422 minsize = sizeof (wl_ps_mode_t); 3423 break; 3424 case MAC_PROP_WL_RADIO: 3425 minsize = sizeof (wl_radio_t); 3426 break; 3427 case MAC_PROP_WL_ESS_LIST: 3428 minsize = sizeof (wl_ess_list_t); 3429 break; 3430 case MAC_PROP_WL_KEY_TAB: 3431 minsize = sizeof (wl_wep_key_tab_t); 3432 break; 3433 case MAC_PROP_WL_CREATE_IBSS: 3434 minsize = sizeof (wl_create_ibss_t); 3435 break; 3436 case MAC_PROP_WL_SETOPTIE: 3437 minsize = sizeof (wl_wpa_ie_t); 3438 break; 3439 case MAC_PROP_WL_DELKEY: 3440 minsize = sizeof (wl_del_key_t); 3441 break; 3442 case MAC_PROP_WL_KEY: 3443 minsize = sizeof (wl_key_t); 3444 break; 3445 case MAC_PROP_WL_MLME: 3446 minsize = sizeof (wl_mlme_t); 3447 break; 3448 case MAC_PROP_VN_PROMISC_FILTERED: 3449 minsize = sizeof (boolean_t); 3450 break; 3451 } 3452 3453 return (valsize >= minsize); 3454 } 3455 3456 /* 3457 * mac_set_prop() sets MAC or hardware driver properties: 3458 * 3459 * - MAC-managed properties such as resource properties include maxbw, 3460 * priority, and cpu binding list, as well as the default port VID 3461 * used by bridging. These properties are consumed by the MAC layer 3462 * itself and not passed down to the driver. For resource control 3463 * properties, this function invokes mac_set_resources() which will 3464 * cache the property value in mac_impl_t and may call 3465 * mac_client_set_resource() to update property value of the primary 3466 * mac client, if it exists. 3467 * 3468 * - Properties which act on the hardware and must be passed to the 3469 * driver, such as MTU, through the driver's mc_setprop() entry point. 
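 *
 * For example, a perimeter-holding caller setting the MTU might look
 * like this illustrative sketch:
 *
 *	uint32_t mtu = 1500;
 *
 *	ASSERT(MAC_PERIM_HELD(mh));
 *	err = mac_set_prop(mh, MAC_PROP_MTU, "mtu", &mtu, sizeof (mtu));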
3470 */ 3471 int 3472 mac_set_prop(mac_handle_t mh, mac_prop_id_t id, char *name, void *val, 3473 uint_t valsize) 3474 { 3475 int err = ENOTSUP; 3476 mac_impl_t *mip = (mac_impl_t *)mh; 3477 3478 ASSERT(MAC_PERIM_HELD(mh)); 3479 3480 switch (id) { 3481 case MAC_PROP_RESOURCE: { 3482 mac_resource_props_t *mrp; 3483 3484 /* call mac_set_resources() for MAC properties */ 3485 ASSERT(valsize >= sizeof (mac_resource_props_t)); 3486 mrp = kmem_zalloc(sizeof (*mrp), KM_SLEEP); 3487 bcopy(val, mrp, sizeof (*mrp)); 3488 err = mac_set_resources(mh, mrp); 3489 kmem_free(mrp, sizeof (*mrp)); 3490 break; 3491 } 3492 3493 case MAC_PROP_PVID: 3494 ASSERT(valsize >= sizeof (uint16_t)); 3495 if (mip->mi_state_flags & MIS_IS_VNIC) 3496 return (EINVAL); 3497 err = mac_set_pvid(mh, *(uint16_t *)val); 3498 break; 3499 3500 case MAC_PROP_MTU: { 3501 uint32_t mtu; 3502 3503 ASSERT(valsize >= sizeof (uint32_t)); 3504 bcopy(val, &mtu, sizeof (mtu)); 3505 err = mac_set_mtu(mh, mtu, NULL); 3506 break; 3507 } 3508 3509 case MAC_PROP_LLIMIT: 3510 case MAC_PROP_LDECAY: { 3511 uint32_t learnval; 3512 3513 if (valsize < sizeof (learnval) || 3514 (mip->mi_state_flags & MIS_IS_VNIC)) 3515 return (EINVAL); 3516 bcopy(val, &learnval, sizeof (learnval)); 3517 if (learnval == 0 && id == MAC_PROP_LDECAY) 3518 return (EINVAL); 3519 if (id == MAC_PROP_LLIMIT) 3520 mip->mi_llimit = learnval; 3521 else 3522 mip->mi_ldecay = learnval; 3523 err = 0; 3524 break; 3525 } 3526 3527 default: 3528 /* For other driver properties, call the driver's callback */ 3529 if (mip->mi_callbacks->mc_callbacks & MC_SETPROP) { 3530 err = mip->mi_callbacks->mc_setprop(mip->mi_driver, 3531 name, id, valsize, val); 3532 } 3533 } 3534 return (err); 3535 } 3536 3537 /* 3538 * mac_get_prop() gets MAC or device driver properties. 3539 * 3540 * If the property is a driver property, mac_get_prop() calls the driver's 3541 * callback entry point to get it. 3542 * If the property is a MAC property, mac_get_prop() invokes mac_get_resources(), 3543 * which returns the cached value in mac_impl_t.
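 *
 * For example, reading the current link state might look like this
 * illustrative sketch:
 *
 *	link_state_t ls;
 *
 *	if (mac_get_prop(mh, MAC_PROP_STATUS, "status", &ls,
 *	    sizeof (ls)) == 0)
 *		... ls now holds the current link state ...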
3544 */ 3545 int 3546 mac_get_prop(mac_handle_t mh, mac_prop_id_t id, char *name, void *val, 3547 uint_t valsize) 3548 { 3549 int err = ENOTSUP; 3550 mac_impl_t *mip = (mac_impl_t *)mh; 3551 uint_t rings; 3552 uint_t vlinks; 3553 3554 bzero(val, valsize); 3555 3556 switch (id) { 3557 case MAC_PROP_RESOURCE: { 3558 mac_resource_props_t *mrp; 3559 3560 /* If mac property, read from cache */ 3561 ASSERT(valsize >= sizeof (mac_resource_props_t)); 3562 mrp = kmem_zalloc(sizeof (*mrp), KM_SLEEP); 3563 mac_get_resources(mh, mrp); 3564 bcopy(mrp, val, sizeof (*mrp)); 3565 kmem_free(mrp, sizeof (*mrp)); 3566 return (0); 3567 } 3568 case MAC_PROP_RESOURCE_EFF: { 3569 mac_resource_props_t *mrp; 3570 3571 /* If mac effective property, read from client */ 3572 ASSERT(valsize >= sizeof (mac_resource_props_t)); 3573 mrp = kmem_zalloc(sizeof (*mrp), KM_SLEEP); 3574 mac_get_effective_resources(mh, mrp); 3575 bcopy(mrp, val, sizeof (*mrp)); 3576 kmem_free(mrp, sizeof (*mrp)); 3577 return (0); 3578 } 3579 3580 case MAC_PROP_PVID: 3581 ASSERT(valsize >= sizeof (uint16_t)); 3582 if (mip->mi_state_flags & MIS_IS_VNIC) 3583 return (EINVAL); 3584 *(uint16_t *)val = mac_get_pvid(mh); 3585 return (0); 3586 3587 case MAC_PROP_LLIMIT: 3588 case MAC_PROP_LDECAY: 3589 ASSERT(valsize >= sizeof (uint32_t)); 3590 if (mip->mi_state_flags & MIS_IS_VNIC) 3591 return (EINVAL); 3592 if (id == MAC_PROP_LLIMIT) 3593 bcopy(&mip->mi_llimit, val, sizeof (mip->mi_llimit)); 3594 else 3595 bcopy(&mip->mi_ldecay, val, sizeof (mip->mi_ldecay)); 3596 return (0); 3597 3598 case MAC_PROP_MTU: { 3599 uint32_t sdu; 3600 3601 ASSERT(valsize >= sizeof (uint32_t)); 3602 mac_sdu_get2(mh, NULL, &sdu, NULL); 3603 bcopy(&sdu, val, sizeof (sdu)); 3604 3605 return (0); 3606 } 3607 case MAC_PROP_STATUS: { 3608 link_state_t link_state; 3609 3610 if (valsize < sizeof (link_state)) 3611 return (EINVAL); 3612 link_state = mac_link_get(mh); 3613 bcopy(&link_state, val, sizeof (link_state)); 3614 3615 return (0); 3616 } 3617 3618 case MAC_PROP_MAX_RX_RINGS_AVAIL: 3619 case MAC_PROP_MAX_TX_RINGS_AVAIL: 3620 ASSERT(valsize >= sizeof (uint_t)); 3621 rings = id == MAC_PROP_MAX_RX_RINGS_AVAIL ? 3622 mac_rxavail_get(mh) : mac_txavail_get(mh); 3623 bcopy(&rings, val, sizeof (uint_t)); 3624 return (0); 3625 3626 case MAC_PROP_MAX_RXHWCLNT_AVAIL: 3627 case MAC_PROP_MAX_TXHWCLNT_AVAIL: 3628 ASSERT(valsize >= sizeof (uint_t)); 3629 vlinks = id == MAC_PROP_MAX_RXHWCLNT_AVAIL ? 3630 mac_rxhwlnksavail_get(mh) : mac_txhwlnksavail_get(mh); 3631 bcopy(&vlinks, val, sizeof (uint_t)); 3632 return (0); 3633 3634 case MAC_PROP_RXRINGSRANGE: 3635 case MAC_PROP_TXRINGSRANGE: 3636 /* 3637 * The values for these properties are returned through 3638 * the MAC_PROP_RESOURCE property. 3639 */ 3640 return (0); 3641 3642 default: 3643 break; 3644 3645 } 3646 3647 /* If driver property, request from driver */ 3648 if (mip->mi_callbacks->mc_callbacks & MC_GETPROP) { 3649 err = mip->mi_callbacks->mc_getprop(mip->mi_driver, name, id, 3650 valsize, val); 3651 } 3652 3653 return (err); 3654 } 3655 3656 /* 3657 * Helper function to initialize the range structure for use in 3658 * mac_prop_info(). If a type other than uint32 needs to be supported, 3659 * it can be passed as an additional argument.
3660 */ 3661 static void 3662 _mac_set_range(mac_propval_range_t *range, uint32_t min, uint32_t max) 3663 { 3664 range->mpr_count = 1; 3665 range->mpr_type = MAC_PROPVAL_UINT32; 3666 range->mpr_range_uint32[0].mpur_min = min; 3667 range->mpr_range_uint32[0].mpur_max = max; 3668 } 3669 3670 /* 3671 * Returns information about the specified property, such as default 3672 * values or permissions. 3673 */ 3674 int 3675 mac_prop_info(mac_handle_t mh, mac_prop_id_t id, char *name, 3676 void *default_val, uint_t default_size, mac_propval_range_t *range, 3677 uint_t *perm) 3678 { 3679 mac_prop_info_state_t state; 3680 mac_impl_t *mip = (mac_impl_t *)mh; 3681 uint_t max; 3682 3683 /* 3684 * A property is read/write by default unless the driver says 3685 * otherwise. 3686 */ 3687 if (perm != NULL) 3688 *perm = MAC_PROP_PERM_RW; 3689 3690 if (default_val != NULL) 3691 bzero(default_val, default_size); 3692 3693 /* 3694 * First, handle framework properties for which we don't need to 3695 * involve the driver. 3696 */ 3697 switch (id) { 3698 case MAC_PROP_RESOURCE: 3699 case MAC_PROP_PVID: 3700 case MAC_PROP_LLIMIT: 3701 case MAC_PROP_LDECAY: 3702 return (0); 3703 3704 case MAC_PROP_MAX_RX_RINGS_AVAIL: 3705 case MAC_PROP_MAX_TX_RINGS_AVAIL: 3706 case MAC_PROP_MAX_RXHWCLNT_AVAIL: 3707 case MAC_PROP_MAX_TXHWCLNT_AVAIL: 3708 if (perm != NULL) 3709 *perm = MAC_PROP_PERM_READ; 3710 return (0); 3711 3712 case MAC_PROP_RXRINGSRANGE: 3713 case MAC_PROP_TXRINGSRANGE: 3714 /* 3715 * Currently, we support range for RX and TX rings properties. 3716 * When we extend this support to maxbw, cpus and priority, 3717 * we should move this to mac_get_resources. 3718 * There is no default value for RX or TX rings. 3719 */ 3720 if ((mip->mi_state_flags & MIS_IS_VNIC) && 3721 mac_is_vnic_primary(mh)) { 3722 /* 3723 * We don't support setting rings for a VLAN 3724 * data link because it shares its ring with the 3725 * primary MAC client. 3726 */ 3727 if (perm != NULL) 3728 *perm = MAC_PROP_PERM_READ; 3729 if (range != NULL) 3730 range->mpr_count = 0; 3731 } else if (range != NULL) { 3732 if (mip->mi_state_flags & MIS_IS_VNIC) 3733 mh = mac_get_lower_mac_handle(mh); 3734 mip = (mac_impl_t *)mh; 3735 if ((id == MAC_PROP_RXRINGSRANGE && 3736 mip->mi_rx_group_type == MAC_GROUP_TYPE_STATIC) || 3737 (id == MAC_PROP_TXRINGSRANGE && 3738 mip->mi_tx_group_type == MAC_GROUP_TYPE_STATIC)) { 3739 if (id == MAC_PROP_RXRINGSRANGE) { 3740 if ((mac_rxhwlnksavail_get(mh) + 3741 mac_rxhwlnksrsvd_get(mh)) <= 1) { 3742 /* 3743 * doesn't support groups or 3744 * rings 3745 */ 3746 range->mpr_count = 0; 3747 } else { 3748 /* 3749 * supports specifying groups, 3750 * but not rings 3751 */ 3752 _mac_set_range(range, 0, 0); 3753 } 3754 } else { 3755 if ((mac_txhwlnksavail_get(mh) + 3756 mac_txhwlnksrsvd_get(mh)) <= 1) { 3757 /* 3758 * doesn't support groups or 3759 * rings 3760 */ 3761 range->mpr_count = 0; 3762 } else { 3763 /* 3764 * supports specifying groups, 3765 * but not rings 3766 */ 3767 _mac_set_range(range, 0, 0); 3768 } 3769 } 3770 } else { 3771 max = id == MAC_PROP_RXRINGSRANGE ? 3772 mac_rxavail_get(mh) + mac_rxrsvd_get(mh) : 3773 mac_txavail_get(mh) + mac_txrsvd_get(mh); 3774 if (max <= 1) { 3775 /* 3776 * doesn't support groups or 3777 * rings 3778 */ 3779 range->mpr_count = 0; 3780 } else { 3781 /* 3782 * -1 because we have to leave out the 3783 * default ring. 
3784 */ 3785 _mac_set_range(range, 1, max - 1); 3786 } 3787 } 3788 } 3789 return (0); 3790 3791 case MAC_PROP_STATUS: 3792 if (perm != NULL) 3793 *perm = MAC_PROP_PERM_READ; 3794 return (0); 3795 } 3796 3797 /* 3798 * Get the property info from the driver if it implements the 3799 * property info entry point. 3800 */ 3801 bzero(&state, sizeof (state)); 3802 3803 if (mip->mi_callbacks->mc_callbacks & MC_PROPINFO) { 3804 state.pr_default = default_val; 3805 state.pr_default_size = default_size; 3806 3807 /* 3808 * The caller specifies the maximum number of ranges 3809 * it can accommodate using mpr_count. We don't touch 3810 * this value until the driver returns from its 3811 * mc_propinfo() callback, and ensure we don't exceed 3812 * this number of ranges as the driver defines the 3813 * supported ranges from its mc_propinfo(). 3814 * 3815 * pr_range_cur_count keeps track of how many ranges 3816 * were defined by the driver from its mc_propinfo() 3817 * entry point. 3818 * 3819 * On exit, the user-specified range mpr_count returns 3820 * the number of ranges specified by the driver on 3821 * success, or the number of ranges it wanted to 3822 * define if that number of ranges could not be 3823 * accommodated by the specified range structure. In 3824 * the latter case, the caller will be able to 3825 * allocate a larger range structure, and query the 3826 * property again. 3827 */ 3828 state.pr_range_cur_count = 0; 3829 state.pr_range = range; 3830 3831 mip->mi_callbacks->mc_propinfo(mip->mi_driver, name, id, 3832 (mac_prop_info_handle_t)&state); 3833 3834 if (state.pr_flags & MAC_PROP_INFO_RANGE) 3835 range->mpr_count = state.pr_range_cur_count; 3836 3837 /* 3838 * The operation could fail if the buffer supplied by 3839 * the user was too small for the range or default 3840 * value of the property. 3841 */ 3842 if (state.pr_errno != 0) 3843 return (state.pr_errno); 3844 3845 if (perm != NULL && state.pr_flags & MAC_PROP_INFO_PERM) 3846 *perm = state.pr_perm; 3847 } 3848 3849 /* 3850 * The MAC layer may want to provide default values or allowed 3851 * ranges for properties if the driver does not provide a 3852 * property info entry point, or that entry point exists, but 3853 * it did not provide a default value or allowed ranges for 3854 * that property.
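 *
 * A caller querying the MTU range, for example, follows this
 * illustrative sketch (error handling elided; see mac_mtu_add() above
 * for a real use):
 *
 *	mac_propval_range_t mpr;
 *
 *	mpr.mpr_count = 1;
 *	err = mac_prop_info(mh, MAC_PROP_MTU, "mtu", NULL, 0, &mpr, NULL);
 *	... mpr.mpr_range_uint32[0] now holds the supported MTU range ...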
3855 */ 3856 switch (id) { 3857 case MAC_PROP_MTU: { 3858 uint32_t sdu; 3859 3860 mac_sdu_get2(mh, NULL, &sdu, NULL); 3861 3862 if (range != NULL && !(state.pr_flags & 3863 MAC_PROP_INFO_RANGE)) { 3864 /* MTU range */ 3865 _mac_set_range(range, sdu, sdu); 3866 } 3867 3868 if (default_val != NULL && !(state.pr_flags & 3869 MAC_PROP_INFO_DEFAULT)) { 3870 if (mip->mi_info.mi_media == DL_ETHER) 3871 sdu = ETHERMTU; 3872 /* default MTU value */ 3873 bcopy(&sdu, default_val, sizeof (sdu)); 3874 } 3875 } 3876 } 3877 3878 return (0); 3879 } 3880 3881 int 3882 mac_fastpath_disable(mac_handle_t mh) 3883 { 3884 mac_impl_t *mip = (mac_impl_t *)mh; 3885 3886 if ((mip->mi_state_flags & MIS_LEGACY) == 0) 3887 return (0); 3888 3889 return (mip->mi_capab_legacy.ml_fastpath_disable(mip->mi_driver)); 3890 } 3891 3892 void 3893 mac_fastpath_enable(mac_handle_t mh) 3894 { 3895 mac_impl_t *mip = (mac_impl_t *)mh; 3896 3897 if ((mip->mi_state_flags & MIS_LEGACY) == 0) 3898 return; 3899 3900 mip->mi_capab_legacy.ml_fastpath_enable(mip->mi_driver); 3901 } 3902 3903 void 3904 mac_register_priv_prop(mac_impl_t *mip, char **priv_props) 3905 { 3906 uint_t nprops, i; 3907 3908 if (priv_props == NULL) 3909 return; 3910 3911 nprops = 0; 3912 while (priv_props[nprops] != NULL) 3913 nprops++; 3914 if (nprops == 0) 3915 return; 3916 3917 3918 mip->mi_priv_prop = kmem_zalloc(nprops * sizeof (char *), KM_SLEEP); 3919 3920 for (i = 0; i < nprops; i++) { 3921 mip->mi_priv_prop[i] = kmem_zalloc(MAXLINKPROPNAME, KM_SLEEP); 3922 (void) strlcpy(mip->mi_priv_prop[i], priv_props[i], 3923 MAXLINKPROPNAME); 3924 } 3925 3926 mip->mi_priv_prop_count = nprops; 3927 } 3928 3929 void 3930 mac_unregister_priv_prop(mac_impl_t *mip) 3931 { 3932 uint_t i; 3933 3934 if (mip->mi_priv_prop_count == 0) { 3935 ASSERT(mip->mi_priv_prop == NULL); 3936 return; 3937 } 3938 3939 for (i = 0; i < mip->mi_priv_prop_count; i++) 3940 kmem_free(mip->mi_priv_prop[i], MAXLINKPROPNAME); 3941 kmem_free(mip->mi_priv_prop, mip->mi_priv_prop_count * 3942 sizeof (char *)); 3943 3944 mip->mi_priv_prop = NULL; 3945 mip->mi_priv_prop_count = 0; 3946 } 3947 3948 /* 3949 * mac_ring_t allocation and free routines. Some rogue drivers may access the 3950 * ring structure (by invoking mac_rx()) even after processing mac_stop_ring(). 3951 * In such cases, if MAC freed the ring structure after mac_stop_ring(), any 3952 * illegal access to the ring structure coming from the driver would panic 3953 * the system. In order to protect the system from such inadvertent access, 3954 * we maintain a cache of rings in the mac_impl_t after they get freed. 3955 * When packets are received on freed rings, MAC (through the generation 3956 * count mechanism) will drop such packets.
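 * (mac_stop_ring() increments the ring's mr_gen_num; mac_rx_ring()
 * compares the generation number passed in by the driver against the
 * ring's current one and drops packets that arrive with a stale
 * generation.)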
3957 */ 3958 static mac_ring_t * 3959 mac_ring_alloc(mac_impl_t *mip) 3960 { 3961 mac_ring_t *ring; 3962 3963 mutex_enter(&mip->mi_ring_lock); 3964 if (mip->mi_ring_freelist != NULL) { 3965 ring = mip->mi_ring_freelist; 3966 mip->mi_ring_freelist = ring->mr_next; 3967 bzero(ring, sizeof (mac_ring_t)); 3968 mutex_exit(&mip->mi_ring_lock); 3969 } else { 3970 mutex_exit(&mip->mi_ring_lock); 3971 ring = kmem_cache_alloc(mac_ring_cache, KM_SLEEP); 3972 } 3973 ASSERT((ring != NULL) && (ring->mr_state == MR_FREE)); 3974 return (ring); 3975 } 3976 3977 static void 3978 mac_ring_free(mac_impl_t *mip, mac_ring_t *ring) 3979 { 3980 ASSERT(ring->mr_state == MR_FREE); 3981 3982 mutex_enter(&mip->mi_ring_lock); 3983 ring->mr_state = MR_FREE; 3984 ring->mr_flag = 0; 3985 ring->mr_next = mip->mi_ring_freelist; 3986 ring->mr_mip = NULL; 3987 mip->mi_ring_freelist = ring; 3988 mac_ring_stat_delete(ring); 3989 mutex_exit(&mip->mi_ring_lock); 3990 } 3991 3992 static void 3993 mac_ring_freeall(mac_impl_t *mip) 3994 { 3995 mac_ring_t *ring_next; 3996 mutex_enter(&mip->mi_ring_lock); 3997 mac_ring_t *ring = mip->mi_ring_freelist; 3998 while (ring != NULL) { 3999 ring_next = ring->mr_next; 4000 kmem_cache_free(mac_ring_cache, ring); 4001 ring = ring_next; 4002 } 4003 mip->mi_ring_freelist = NULL; 4004 mutex_exit(&mip->mi_ring_lock); 4005 } 4006 4007 int 4008 mac_start_ring(mac_ring_t *ring) 4009 { 4010 int rv = 0; 4011 4012 ASSERT(ring->mr_state == MR_FREE); 4013 4014 if (ring->mr_start != NULL) { 4015 rv = ring->mr_start(ring->mr_driver, ring->mr_gen_num); 4016 if (rv != 0) 4017 return (rv); 4018 } 4019 4020 ring->mr_state = MR_INUSE; 4021 return (rv); 4022 } 4023 4024 void 4025 mac_stop_ring(mac_ring_t *ring) 4026 { 4027 ASSERT(ring->mr_state == MR_INUSE); 4028 4029 if (ring->mr_stop != NULL) 4030 ring->mr_stop(ring->mr_driver); 4031 4032 ring->mr_state = MR_FREE; 4033 4034 /* 4035 * Increment the ring generation number for this ring. 4036 */ 4037 ring->mr_gen_num++; 4038 } 4039 4040 int 4041 mac_start_group(mac_group_t *group) 4042 { 4043 int rv = 0; 4044 4045 if (group->mrg_start != NULL) 4046 rv = group->mrg_start(group->mrg_driver); 4047 4048 return (rv); 4049 } 4050 4051 void 4052 mac_stop_group(mac_group_t *group) 4053 { 4054 if (group->mrg_stop != NULL) 4055 group->mrg_stop(group->mrg_driver); 4056 } 4057 4058 /* 4059 * Called from mac_start() on the default Rx group. Broadcast and multicast 4060 * packets are received only on the default group. Hence the default group 4061 * needs to be up even if the primary client is not up, for the other groups 4062 * to be functional. We do this by calling this function at mac_start time 4063 * itself. However the broadcast packets that are received can't make their 4064 * way beyond mac_rx until a mac client creates a broadcast flow. 4065 */ 4066 static int 4067 mac_start_group_and_rings(mac_group_t *group) 4068 { 4069 mac_ring_t *ring; 4070 int rv = 0; 4071 4072 ASSERT(group->mrg_state == MAC_GROUP_STATE_REGISTERED); 4073 if ((rv = mac_start_group(group)) != 0) 4074 return (rv); 4075 4076 for (ring = group->mrg_rings; ring != NULL; ring = ring->mr_next) { 4077 ASSERT(ring->mr_state == MR_FREE); 4078 4079 if ((rv = mac_start_ring(ring)) != 0) 4080 goto error; 4081 4082 /* 4083 * When aggr_set_port_sdu() is called, it will remove 4084 * the port client's unicast address. This will cause 4085 * MAC to stop the default group's rings on the port 4086 * MAC. After it modifies the SDU, it will then re-add 4087 * the unicast address. 
At that point, this function is 4088 * called to start the default group's rings. Normally 4089 * this function would set the classify type to 4090 * MAC_SW_CLASSIFIER, but that would break aggr, which 4091 * relies on the passthru classify mode being set for 4092 * correct delivery (see mac_rx_common()). To avoid 4093 * that, we check for a passthru callback and set the 4094 * classify type to MAC_PASSTHRU_CLASSIFIER, as it was 4095 * before the rings were stopped. 4096 */ 4097 ring->mr_classify_type = (ring->mr_pt_fn != NULL) ? 4098 MAC_PASSTHRU_CLASSIFIER : MAC_SW_CLASSIFIER; 4099 } 4100 return (0); 4101 4102 error: 4103 mac_stop_group_and_rings(group); 4104 return (rv); 4105 } 4106 4107 /* Called from mac_stop() on the default Rx group */ 4108 static void 4109 mac_stop_group_and_rings(mac_group_t *group) 4110 { 4111 mac_ring_t *ring; 4112 4113 for (ring = group->mrg_rings; ring != NULL; ring = ring->mr_next) { 4114 if (ring->mr_state != MR_FREE) { 4115 mac_stop_ring(ring); 4116 ring->mr_flag = 0; 4117 ring->mr_classify_type = MAC_NO_CLASSIFIER; 4118 } 4119 } 4120 mac_stop_group(group); 4121 } 4122 4123 4124 static mac_ring_t * 4125 mac_init_ring(mac_impl_t *mip, mac_group_t *group, int index, 4126 mac_capab_rings_t *cap_rings) 4127 { 4128 mac_ring_t *ring, *rnext; 4129 mac_ring_info_t ring_info; 4130 ddi_intr_handle_t ddi_handle; 4131 4132 ring = mac_ring_alloc(mip); 4133 4134 /* Prepare basic information of the ring */ 4135 4136 /* 4137 * Ring index is numbered to be unique across a particular device. 4138 * Ring index computation makes the following assumptions: 4139 * - For drivers with static grouping (e.g. ixgbe, bge), the 4140 * ring index exchanged with the driver (e.g. during mr_rget) 4141 * is unique only across the group the ring belongs to. 4142 * - Drivers with dynamic grouping (e.g. nxge) start 4143 * with a single group (mrg_index = 0). 4144 */ 4145 ring->mr_index = group->mrg_index * group->mrg_info.mgi_count + index; 4146 ring->mr_type = group->mrg_type; 4147 ring->mr_gh = (mac_group_handle_t)group; 4148 4149 /* Insert the new ring into the group's list. */ 4150 ring->mr_next = group->mrg_rings; 4151 group->mrg_rings = ring; 4152 4153 /* Zero to reuse the info data structure */ 4154 bzero(&ring_info, sizeof (ring_info)); 4155 4156 /* Query ring information from the driver */ 4157 cap_rings->mr_rget(mip->mi_driver, group->mrg_type, group->mrg_index, 4158 index, &ring_info, (mac_ring_handle_t)ring); 4159 4160 ring->mr_info = ring_info; 4161 4162 /* 4163 * The interrupt handle could be shared among multiple rings. 4164 * Thus if there is a bunch of rings that are sharing an 4165 * interrupt, then only one ring among the bunch will be made 4166 * available for interrupt re-targeting; the rest will have the 4167 * ddi_shared flag set to B_TRUE and will not be available for 4168 * interrupt re-targeting. 4169 */ 4170 if ((ddi_handle = ring_info.mri_intr.mi_ddi_handle) != NULL) { 4171 rnext = ring->mr_next; 4172 while (rnext != NULL) { 4173 if (rnext->mr_info.mri_intr.mi_ddi_handle == 4174 ddi_handle) { 4175 /* 4176 * If the default ring (mr_index == 0) is part 4177 * of a group of rings sharing an 4178 * interrupt, then set the ddi_shared flag for 4179 * the default ring and give another ring 4180 * the chance to be re-targeted.
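 * For example, if rings 0 and 1 share one interrupt handle,
 * ring 0 (the default ring) is marked mi_ddi_shared and ring 1
 * remains eligible for interrupt re-targeting.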
4181 */ 4182 if (rnext->mr_index == 0 && 4183 !rnext->mr_info.mri_intr.mi_ddi_shared) { 4184 rnext->mr_info.mri_intr.mi_ddi_shared = 4185 B_TRUE; 4186 } else { 4187 ring->mr_info.mri_intr.mi_ddi_shared = 4188 B_TRUE; 4189 } 4190 break; 4191 } 4192 rnext = rnext->mr_next; 4193 } 4194 /* 4195 * If rnext is NULL, then no matching ddi_handle was found. 4196 * Rx rings get registered first. So if this is a Tx ring, 4197 * then go through all the Rx rings and see if there is a 4198 * matching ddi handle. 4199 */ 4200 if (rnext == NULL && ring->mr_type == MAC_RING_TYPE_TX) { 4201 mac_compare_ddi_handle(mip->mi_rx_groups, 4202 mip->mi_rx_group_count, ring); 4203 } 4204 } 4205 4206 /* Update the ring's status */ 4207 ring->mr_state = MR_FREE; 4208 ring->mr_flag = 0; 4209 4210 /* Update the ring count of the group */ 4211 group->mrg_cur_count++; 4212 4213 /* Create per-ring kstats */ 4214 if (ring->mr_stat != NULL) { 4215 ring->mr_mip = mip; 4216 mac_ring_stat_create(ring); 4217 } 4218 4219 return (ring); 4220 } 4221 4222 /* 4223 * Rings are chained together for easy regrouping. 4224 */ 4225 static void 4226 mac_init_group(mac_impl_t *mip, mac_group_t *group, int size, 4227 mac_capab_rings_t *cap_rings) 4228 { 4229 int index; 4230 4231 /* 4232 * Initialize all ring members of this group. A size of zero will not 4233 * enter the loop, so it's safe for initializing an empty group. 4234 */ 4235 for (index = size - 1; index >= 0; index--) 4236 (void) mac_init_ring(mip, group, index, cap_rings); 4237 } 4238 4239 int 4240 mac_init_rings(mac_impl_t *mip, mac_ring_type_t rtype) 4241 { 4242 mac_capab_rings_t *cap_rings; 4243 mac_group_t *group; 4244 mac_group_t *groups; 4245 mac_group_info_t group_info; 4246 uint_t group_free = 0; 4247 uint_t ring_left; 4248 mac_ring_t *ring; 4249 int g; 4250 int err = 0; 4251 uint_t grpcnt; 4252 boolean_t pseudo_txgrp = B_FALSE; 4253 4254 switch (rtype) { 4255 case MAC_RING_TYPE_RX: 4256 ASSERT(mip->mi_rx_groups == NULL); 4257 4258 cap_rings = &mip->mi_rx_rings_cap; 4259 cap_rings->mr_type = MAC_RING_TYPE_RX; 4260 break; 4261 case MAC_RING_TYPE_TX: 4262 ASSERT(mip->mi_tx_groups == NULL); 4263 4264 cap_rings = &mip->mi_tx_rings_cap; 4265 cap_rings->mr_type = MAC_RING_TYPE_TX; 4266 break; 4267 default: 4268 ASSERT(B_FALSE); 4269 } 4270 4271 if (!i_mac_capab_get((mac_handle_t)mip, MAC_CAPAB_RINGS, cap_rings)) 4272 return (0); 4273 grpcnt = cap_rings->mr_gnum; 4274 4275 /* 4276 * If we have multiple TX rings, but only one TX group, we can 4277 * create pseudo TX groups (one per TX ring) in the MAC layer, 4278 * except for an aggr. For an aggr we currently maintain only 4279 * one group with all the rings (for all its ports); going 4280 * forward we might change this. 4281 */ 4282 if (rtype == MAC_RING_TYPE_TX && 4283 cap_rings->mr_gnum == 0 && cap_rings->mr_rnum > 0 && 4284 (mip->mi_state_flags & MIS_IS_AGGR) == 0) { 4285 /* 4286 * The -1 here is because we create a default TX group 4287 * with all the rings in it. 4288 */ 4289 grpcnt = cap_rings->mr_rnum - 1; 4290 pseudo_txgrp = B_TRUE; 4291 } 4292 4293 /* 4294 * Allocate a contiguous buffer for all groups. 4295 */ 4296 groups = kmem_zalloc(sizeof (mac_group_t) * (grpcnt + 1), KM_SLEEP); 4297 4298 ring_left = cap_rings->mr_rnum; 4299 4300 /* 4301 * Get all ring groups if any, and get their ring members 4302 * if any.
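 * (When pseudo TX groups are in use, e.g. for a driver exposing
 * 4 TX rings and no TX groups, grpcnt is 3 and each pseudo group
 * below is simply marked registered.)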
4303 */ 4304 for (g = 0; g < grpcnt; g++) { 4305 group = groups + g; 4306 4307 /* Prepare basic information of the group */ 4308 group->mrg_index = g; 4309 group->mrg_type = rtype; 4310 group->mrg_state = MAC_GROUP_STATE_UNINIT; 4311 group->mrg_mh = (mac_handle_t)mip; 4312 group->mrg_next = group + 1; 4313 4314 /* Zero to reuse the info data structure */ 4315 bzero(&group_info, sizeof (group_info)); 4316 4317 if (pseudo_txgrp) { 4318 /* 4319 * This is a pseudo group that we created; apart 4320 * from setting the state there is nothing to be 4321 * done. 4322 */ 4323 group->mrg_state = MAC_GROUP_STATE_REGISTERED; 4324 group_free++; 4325 continue; 4326 } 4327 /* Query group information from the driver */ 4328 cap_rings->mr_gget(mip->mi_driver, rtype, g, &group_info, 4329 (mac_group_handle_t)group); 4330 4331 switch (cap_rings->mr_group_type) { 4332 case MAC_GROUP_TYPE_DYNAMIC: 4333 if (cap_rings->mr_gaddring == NULL || 4334 cap_rings->mr_gremring == NULL) { 4335 DTRACE_PROBE3( 4336 mac__init__rings_no_addremring, 4337 char *, mip->mi_name, 4338 mac_group_add_ring_t, 4339 cap_rings->mr_gaddring, 4340 mac_group_rem_ring_t, 4341 cap_rings->mr_gremring); 4342 err = EINVAL; 4343 goto bail; 4344 } 4345 4346 switch (rtype) { 4347 case MAC_RING_TYPE_RX: 4348 /* 4349 * The first RX group must have non-zero 4350 * rings, and the following groups must 4351 * have zero rings. 4352 */ 4353 if (g == 0 && group_info.mgi_count == 0) { 4354 DTRACE_PROBE1( 4355 mac__init__rings__rx__def__zero, 4356 char *, mip->mi_name); 4357 err = EINVAL; 4358 goto bail; 4359 } 4360 if (g > 0 && group_info.mgi_count != 0) { 4361 DTRACE_PROBE3( 4362 mac__init__rings__rx__nonzero, 4363 char *, mip->mi_name, 4364 int, g, int, group_info.mgi_count); 4365 err = EINVAL; 4366 goto bail; 4367 } 4368 break; 4369 case MAC_RING_TYPE_TX: 4370 /* 4371 * All TX ring groups must have zero rings. 4372 */ 4373 if (group_info.mgi_count != 0) { 4374 DTRACE_PROBE3( 4375 mac__init__rings__tx__nonzero, 4376 char *, mip->mi_name, 4377 int, g, int, group_info.mgi_count); 4378 err = EINVAL; 4379 goto bail; 4380 } 4381 break; 4382 } 4383 break; 4384 case MAC_GROUP_TYPE_STATIC: 4385 /* 4386 * Note that an empty group is allowed, e.g., an aggr 4387 * would start with an empty group. 4388 */ 4389 break; 4390 default: 4391 /* unknown group type */ 4392 DTRACE_PROBE2(mac__init__rings__unknown__type, 4393 char *, mip->mi_name, 4394 int, cap_rings->mr_group_type); 4395 err = EINVAL; 4396 goto bail; 4397 } 4398 4399 4400 /* 4401 * The driver must register some form of hardware MAC 4402 * filter in order for Rx groups to support multiple 4403 * MAC addresses. 4404 */ 4405 if (rtype == MAC_RING_TYPE_RX && 4406 (group_info.mgi_addmac == NULL || 4407 group_info.mgi_remmac == NULL)) { 4408 DTRACE_PROBE1(mac__init__rings__no__mac__filter, 4409 char *, mip->mi_name); 4410 err = EINVAL; 4411 goto bail; 4412 } 4413 4414 /* Cache driver-supplied information */ 4415 group->mrg_info = group_info; 4416 4417 /* Update the group's status and group count.
*/ 4418 mac_set_group_state(group, MAC_GROUP_STATE_REGISTERED); 4419 group_free++; 4420 4421 group->mrg_rings = NULL; 4422 group->mrg_cur_count = 0; 4423 mac_init_group(mip, group, group_info.mgi_count, cap_rings); 4424 ring_left -= group_info.mgi_count; 4425 4426 /* The current group size should be equal to the default value */ 4427 ASSERT(group->mrg_cur_count == group_info.mgi_count); 4428 } 4429 4430 /* Build up a dummy group for free resources as a pool */ 4431 group = groups + grpcnt; 4432 4433 /* Prepare basic information of the group */ 4434 group->mrg_index = -1; 4435 group->mrg_type = rtype; 4436 group->mrg_state = MAC_GROUP_STATE_UNINIT; 4437 group->mrg_mh = (mac_handle_t)mip; 4438 group->mrg_next = NULL; 4439 4440 /* 4441 * If there are ungrouped rings, allocate a contiguous buffer for 4442 * the remaining resources. 4443 */ 4444 if (ring_left != 0) { 4445 group->mrg_rings = NULL; 4446 group->mrg_cur_count = 0; 4447 mac_init_group(mip, group, ring_left, cap_rings); 4448 4449 /* The current group size should be equal to ring_left */ 4450 ASSERT(group->mrg_cur_count == ring_left); 4451 4452 ring_left = 0; 4453 4454 /* Update this group's status */ 4455 mac_set_group_state(group, MAC_GROUP_STATE_REGISTERED); 4456 } else { 4457 group->mrg_rings = NULL; 4458 } 4459 4460 ASSERT(ring_left == 0); 4461 4462 bail: 4463 4464 /* Cache other important information to finalize the initialization */ 4465 switch (rtype) { 4466 case MAC_RING_TYPE_RX: 4467 mip->mi_rx_group_type = cap_rings->mr_group_type; 4468 mip->mi_rx_group_count = cap_rings->mr_gnum; 4469 mip->mi_rx_groups = groups; 4470 mip->mi_rx_donor_grp = groups; 4471 if (mip->mi_rx_group_type == MAC_GROUP_TYPE_DYNAMIC) { 4472 /* 4473 * The default ring is reserved since it is 4474 * used for sending broadcast etc. packets. 4475 */ 4476 mip->mi_rxrings_avail = 4477 mip->mi_rx_groups->mrg_cur_count - 1; 4478 mip->mi_rxrings_rsvd = 1; 4479 } 4480 /* 4481 * The default group cannot be reserved. It is used by 4482 * all the clients that do not have an exclusive group. 4483 */ 4484 mip->mi_rxhwclnt_avail = mip->mi_rx_group_count - 1; 4485 mip->mi_rxhwclnt_used = 1; 4486 break; 4487 case MAC_RING_TYPE_TX: 4488 mip->mi_tx_group_type = pseudo_txgrp ? MAC_GROUP_TYPE_DYNAMIC : 4489 cap_rings->mr_group_type; 4490 mip->mi_tx_group_count = grpcnt; 4491 mip->mi_tx_group_free = group_free; 4492 mip->mi_tx_groups = groups; 4493 4494 group = groups + grpcnt; 4495 ring = group->mrg_rings; 4496 /* 4497 * The ring can be NULL in the case of aggr. Aggr will 4498 * have an empty Tx group which will get populated 4499 * later when pseudo Tx rings are added after 4500 * mac_register() is done. 4501 */ 4502 if (ring == NULL) { 4503 ASSERT(mip->mi_state_flags & MIS_IS_AGGR); 4504 /* 4505 * Pass the group to aggr so it can add Tx 4506 * rings to the group later. 4507 */ 4508 cap_rings->mr_gget(mip->mi_driver, rtype, 0, NULL, 4509 (mac_group_handle_t)group); 4510 /* 4511 * Even though there are no rings at this time 4512 * (rings will come later), set the group 4513 * state to registered. 4514 */ 4515 group->mrg_state = MAC_GROUP_STATE_REGISTERED; 4516 } else { 4517 /* 4518 * Ring 0 is used as the default one and it could be 4519 * assigned to a client as well.
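 * Walk the ring list to locate ring 0 and record it as
 * the default Tx ring.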
4520 */ 4521 while ((ring->mr_index != 0) && (ring->mr_next != NULL)) 4522 ring = ring->mr_next; 4523 ASSERT(ring->mr_index == 0); 4524 mip->mi_default_tx_ring = (mac_ring_handle_t)ring; 4525 } 4526 if (mip->mi_tx_group_type == MAC_GROUP_TYPE_DYNAMIC) { 4527 mip->mi_txrings_avail = group->mrg_cur_count - 1; 4528 /* 4529 * The default ring cannot be reserved. 4530 */ 4531 mip->mi_txrings_rsvd = 1; 4532 } 4533 /* 4534 * The default group cannot be reserved. It will be shared 4535 * by clients that do not have an exclusive group. 4536 */ 4537 mip->mi_txhwclnt_avail = mip->mi_tx_group_count; 4538 mip->mi_txhwclnt_used = 1; 4539 break; 4540 default: 4541 ASSERT(B_FALSE); 4542 } 4543 4544 if (err != 0) 4545 mac_free_rings(mip, rtype); 4546 4547 return (err); 4548 } 4549 4550 /* 4551 * The ddi interrupt handle could be shared among rings. If so, compare 4552 * the new ring's ddi handle with the existing ones and set the 4553 * ddi_shared flag. 4554 */ 4555 void 4556 mac_compare_ddi_handle(mac_group_t *groups, uint_t grpcnt, mac_ring_t *cring) 4557 { 4558 mac_group_t *group; 4559 mac_ring_t *ring; 4560 ddi_intr_handle_t ddi_handle; 4561 int g; 4562 4563 ddi_handle = cring->mr_info.mri_intr.mi_ddi_handle; 4564 for (g = 0; g < grpcnt; g++) { 4565 group = groups + g; 4566 for (ring = group->mrg_rings; ring != NULL; 4567 ring = ring->mr_next) { 4568 if (ring == cring) 4569 continue; 4570 if (ring->mr_info.mri_intr.mi_ddi_handle == 4571 ddi_handle) { 4572 if (cring->mr_type == MAC_RING_TYPE_RX && 4573 ring->mr_index == 0 && 4574 !ring->mr_info.mri_intr.mi_ddi_shared) { 4575 ring->mr_info.mri_intr.mi_ddi_shared = 4576 B_TRUE; 4577 } else { 4578 cring->mr_info.mri_intr.mi_ddi_shared = 4579 B_TRUE; 4580 } 4581 return; 4582 } 4583 } 4584 } 4585 } 4586 4587 /* 4588 * Called to free all groups of a particular type (RX or TX). It's assumed 4589 * that no clients are using these groups. 4590 */ 4591 void 4592 mac_free_rings(mac_impl_t *mip, mac_ring_type_t rtype) 4593 { 4594 mac_group_t *group, *groups; 4595 uint_t group_count; 4596 4597 switch (rtype) { 4598 case MAC_RING_TYPE_RX: 4599 if (mip->mi_rx_groups == NULL) 4600 return; 4601 4602 groups = mip->mi_rx_groups; 4603 group_count = mip->mi_rx_group_count; 4604 4605 mip->mi_rx_groups = NULL; 4606 mip->mi_rx_donor_grp = NULL; 4607 mip->mi_rx_group_count = 0; 4608 break; 4609 case MAC_RING_TYPE_TX: 4610 ASSERT(mip->mi_tx_group_count == mip->mi_tx_group_free); 4611 4612 if (mip->mi_tx_groups == NULL) 4613 return; 4614 4615 groups = mip->mi_tx_groups; 4616 group_count = mip->mi_tx_group_count; 4617 4618 mip->mi_tx_groups = NULL; 4619 mip->mi_tx_group_count = 0; 4620 mip->mi_tx_group_free = 0; 4621 mip->mi_default_tx_ring = NULL; 4622 break; 4623 default: 4624 ASSERT(B_FALSE); 4625 } 4626 4627 for (group = groups; group != NULL; group = group->mrg_next) { 4628 mac_ring_t *ring; 4629 4630 if (group->mrg_cur_count == 0) 4631 continue; 4632 4633 ASSERT(group->mrg_rings != NULL); 4634 4635 while ((ring = group->mrg_rings) != NULL) { 4636 group->mrg_rings = ring->mr_next; 4637 mac_ring_free(mip, ring); 4638 } 4639 } 4640 4641 /* Free all the cached rings */ 4642 mac_ring_freeall(mip); 4643 /* Free the block of group data structures */ 4644 kmem_free(groups, sizeof (mac_group_t) * (group_count + 1)); 4645 } 4646 4647 /* 4648 * Associate the VLAN filter with the receive group.
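 *
 * A sketch of the expected pairing, for a hypothetical caller that
 * has already reserved 'grp' and validated 'vid' while holding the
 * perimeter:
 *
 *	if ((err = mac_group_addvlan(grp, vid)) != 0)
 *		return (err);
 *	...
 *	(void) mac_group_remvlan(grp, vid);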
4649 */ 4650 int 4651 mac_group_addvlan(mac_group_t *group, uint16_t vlan) 4652 { 4653 VERIFY3S(group->mrg_type, ==, MAC_RING_TYPE_RX); 4654 VERIFY3P(group->mrg_info.mgi_addvlan, !=, NULL); 4655 4656 if (vlan > VLAN_ID_MAX) 4657 return (EINVAL); 4658 4659 vlan = MAC_VLAN_UNTAGGED_VID(vlan); 4660 return (group->mrg_info.mgi_addvlan(group->mrg_info.mgi_driver, vlan)); 4661 } 4662 4663 /* 4664 * Dissociate the VLAN from the receive group. 4665 */ 4666 int 4667 mac_group_remvlan(mac_group_t *group, uint16_t vlan) 4668 { 4669 VERIFY3S(group->mrg_type, ==, MAC_RING_TYPE_RX); 4670 VERIFY3P(group->mrg_info.mgi_remvlan, !=, NULL); 4671 4672 if (vlan > VLAN_ID_MAX) 4673 return (EINVAL); 4674 4675 vlan = MAC_VLAN_UNTAGGED_VID(vlan); 4676 return (group->mrg_info.mgi_remvlan(group->mrg_info.mgi_driver, vlan)); 4677 } 4678 4679 /* 4680 * Associate a MAC address with a receive group. 4681 * 4682 * The return value of this function should always be checked properly, because 4683 * any type of failure could cause unexpected results. A MAC address can be 4684 * added to or removed from a group only after the group has been reserved. 4685 * Ideally, a successful reservation always leads to calling mac_group_addmac() 4686 * to steer desired traffic. Failure to add a unicast MAC address doesn't 4687 * always imply that the group is functioning abnormally. 4688 * 4689 * Currently this function is called everywhere, and it reflects assumptions 4690 * about MAC addresses in the implementation. CR 6735196. 4691 */ 4692 int 4693 mac_group_addmac(mac_group_t *group, const uint8_t *addr) 4694 { 4695 VERIFY3S(group->mrg_type, ==, MAC_RING_TYPE_RX); 4696 VERIFY3P(group->mrg_info.mgi_addmac, !=, NULL); 4697 4698 return (group->mrg_info.mgi_addmac(group->mrg_info.mgi_driver, addr)); 4699 } 4700 4701 /* 4702 * Remove the association between a MAC address and a receive group. 4703 */ 4704 int 4705 mac_group_remmac(mac_group_t *group, const uint8_t *addr) 4706 { 4707 VERIFY3S(group->mrg_type, ==, MAC_RING_TYPE_RX); 4708 VERIFY3P(group->mrg_info.mgi_remmac, !=, NULL); 4709 4710 return (group->mrg_info.mgi_remmac(group->mrg_info.mgi_driver, addr)); 4711 } 4712 4713 /* 4714 * This is the entry point for packets transmitted through the bridge 4715 * code. If no bridge is in place, mac_ring_tx() transmits via the tx 4716 * ring. The 'rh' pointer may be NULL to select the default ring. 4717 */ 4718 mblk_t * 4719 mac_bridge_tx(mac_impl_t *mip, mac_ring_handle_t rh, mblk_t *mp) 4720 { 4721 mac_handle_t mh; 4722 4723 /* 4724 * Once we take a reference on the bridge link, the bridge 4725 * module itself can't unload, so the callback pointers are 4726 * stable. 4727 */ 4728 mutex_enter(&mip->mi_bridge_lock); 4729 if ((mh = mip->mi_bridge_link) != NULL) 4730 mac_bridge_ref_cb(mh, B_TRUE); 4731 mutex_exit(&mip->mi_bridge_lock); 4732 if (mh == NULL) { 4733 mp = mac_ring_tx((mac_handle_t)mip, rh, mp); 4734 } else { 4735 /* 4736 * The bridge may place this mblk on a provider's Tx 4737 * path, a mac's Rx path, or both. Since we don't have 4738 * enough information at this point, we can't be sure 4739 * that the destination(s) are capable of handling the 4740 * hardware offloads requested by the mblk. We emulate 4741 * them here as it is the safest choice. In the 4742 * future, if bridge performance becomes a priority, 4743 * we can elide the emulation here and leave the 4744 * choice up to bridge. 4745 * 4746 * We don't clear the DB_CKSUMFLAGS here because 4747 * HCK_IPV4_HDRCKSUM (Tx) and HCK_IPV4_HDRCKSUM_OK 4748 * (Rx) still have the same value.
If the bridge 4749 * receives a packet from a HCKSUM_IPHDRCKSUM NIC then 4750 * the mac(s) it is forwarded on may calculate the 4751 * checksum again, but incorrectly (because the 4752 * checksum field is not zero). Until the 4753 * HCK_IPV4_HDRCKSUM/HCK_IPV4_HDRCKSUM_OK issue is 4754 * resolved, we leave the flag clearing in bridge 4755 * itself. 4756 */ 4757 if ((DB_CKSUMFLAGS(mp) & (HCK_TX_FLAGS | HW_LSO_FLAGS)) != 0) { 4758 mac_hw_emul(&mp, NULL, NULL, MAC_ALL_EMULS); 4759 } 4760 4761 mp = mac_bridge_tx_cb(mh, rh, mp); 4762 mac_bridge_ref_cb(mh, B_FALSE); 4763 } 4764 4765 return (mp); 4766 } 4767 4768 /* 4769 * Find a ring from its index. 4770 */ 4771 mac_ring_handle_t 4772 mac_find_ring(mac_group_handle_t gh, int index) 4773 { 4774 mac_group_t *group = (mac_group_t *)gh; 4775 mac_ring_t *ring; 4776 4777 for (ring = group->mrg_rings; ring != NULL; ring = ring->mr_next) 4778 if (ring->mr_index == index) 4779 break; 4780 4781 return ((mac_ring_handle_t)ring); 4782 } 4783 /* 4784 * Add a ring to an existing group. 4785 * 4786 * The ring must be either passed directly (for example if the ring 4787 * movement is initiated by the framework), or specified through a driver 4788 * index (for example when the ring is added by the driver). 4789 * 4790 * The caller needs to call mac_perim_enter() before calling this function. 4791 */ 4792 int 4793 i_mac_group_add_ring(mac_group_t *group, mac_ring_t *ring, int index) 4794 { 4795 mac_impl_t *mip = (mac_impl_t *)group->mrg_mh; 4796 mac_capab_rings_t *cap_rings; 4797 boolean_t driver_call = (ring == NULL); 4798 mac_group_type_t group_type; 4799 int ret = 0; 4800 flow_entry_t *flent; 4801 4802 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 4803 4804 switch (group->mrg_type) { 4805 case MAC_RING_TYPE_RX: 4806 cap_rings = &mip->mi_rx_rings_cap; 4807 group_type = mip->mi_rx_group_type; 4808 break; 4809 case MAC_RING_TYPE_TX: 4810 cap_rings = &mip->mi_tx_rings_cap; 4811 group_type = mip->mi_tx_group_type; 4812 break; 4813 default: 4814 ASSERT(B_FALSE); 4815 } 4816 4817 /* 4818 * There should be no ring with the same ring index in the target 4819 * group. 4820 */ 4821 ASSERT(mac_find_ring((mac_group_handle_t)group, 4822 driver_call ? index : ring->mr_index) == NULL); 4823 4824 if (driver_call) { 4825 /* 4826 * The function is called as a result of a request from 4827 * a driver to add a ring to an existing group, for example 4828 * from the aggregation driver. Allocate a new mac_ring_t 4829 * for that ring. 4830 */ 4831 ring = mac_init_ring(mip, group, index, cap_rings); 4832 ASSERT(group->mrg_state > MAC_GROUP_STATE_UNINIT); 4833 } else { 4834 /* 4835 * The function is called as a result of a MAC layer request 4836 * to add a ring to an existing group. In this case the 4837 * ring is being moved between groups, which requires 4838 * the underlying driver to support dynamic grouping, 4839 * and the mac_ring_t already exists. 4840 */ 4841 ASSERT(group_type == MAC_GROUP_TYPE_DYNAMIC); 4842 ASSERT(group->mrg_driver == NULL || 4843 cap_rings->mr_gaddring != NULL); 4844 ASSERT(ring->mr_gh == NULL); 4845 } 4846 4847 /* 4848 * At this point the ring should not be in use, and it should be 4849 * of the right type for the target group. 4850 */ 4851 ASSERT(ring->mr_state < MR_INUSE); 4852 ASSERT(ring->mr_srs == NULL); 4853 ASSERT(ring->mr_type == group->mrg_type); 4854 4855 if (!driver_call) { 4856 /* 4857 * Add the driver-level hardware ring if the process was not 4858 * initiated by the driver, and the target group is not the 4859 * default group.
4860 */ 4861 if (group->mrg_driver != NULL) { 4862 cap_rings->mr_gaddring(group->mrg_driver, 4863 ring->mr_driver, ring->mr_type); 4864 } 4865 4866 /* 4867 * Insert the ring ahead of the existing rings. 4868 */ 4869 ring->mr_next = group->mrg_rings; 4870 group->mrg_rings = ring; 4871 ring->mr_gh = (mac_group_handle_t)group; 4872 group->mrg_cur_count++; 4873 } 4874 4875 /* 4876 * If the group has not been actively used, we're done. 4877 */ 4878 if (group->mrg_index != -1 && 4879 group->mrg_state < MAC_GROUP_STATE_RESERVED) 4880 return (0); 4881 4882 /* 4883 * Start the ring if needed. A failure causes us to undo the grouping action. 4884 */ 4885 if (ring->mr_state != MR_INUSE) { 4886 if ((ret = mac_start_ring(ring)) != 0) { 4887 if (!driver_call) { 4888 cap_rings->mr_gremring(group->mrg_driver, 4889 ring->mr_driver, ring->mr_type); 4890 } 4891 group->mrg_cur_count--; 4892 group->mrg_rings = ring->mr_next; 4893 4894 ring->mr_gh = NULL; 4895 4896 if (driver_call) 4897 mac_ring_free(mip, ring); 4898 4899 return (ret); 4900 } 4901 } 4902 4903 /* 4904 * Set up the SRS/SR according to the ring type. 4905 */ 4906 switch (ring->mr_type) { 4907 case MAC_RING_TYPE_RX: 4908 /* 4909 * Set up an SRS on top of the new ring if the group is 4910 * reserved for someone's exclusive use. 4911 */ 4912 if (group->mrg_state == MAC_GROUP_STATE_RESERVED) { 4913 mac_client_impl_t *mcip = MAC_GROUP_ONLY_CLIENT(group); 4914 4915 VERIFY3P(mcip, !=, NULL); 4916 flent = mcip->mci_flent; 4917 VERIFY3S(flent->fe_rx_srs_cnt, >, 0); 4918 mac_rx_srs_group_setup(mcip, flent, SRST_LINK); 4919 mac_fanout_setup(mcip, flent, MCIP_RESOURCE_PROPS(mcip), 4920 mac_rx_deliver, mcip, NULL, NULL); 4921 } else { 4922 ring->mr_classify_type = MAC_SW_CLASSIFIER; 4923 } 4924 break; 4925 case MAC_RING_TYPE_TX: 4926 { 4927 mac_grp_client_t *mgcp = group->mrg_clients; 4928 mac_client_impl_t *mcip; 4929 mac_soft_ring_set_t *mac_srs; 4930 mac_srs_tx_t *tx; 4931 4932 if (MAC_GROUP_NO_CLIENT(group)) { 4933 if (ring->mr_state == MR_INUSE) 4934 mac_stop_ring(ring); 4935 ring->mr_flag = 0; 4936 break; 4937 } 4938 /* 4939 * If the rings are being moved to a group that has 4940 * clients using it, then add the new rings to the 4941 * clients' SRS. 4942 */ 4943 while (mgcp != NULL) { 4944 boolean_t is_aggr; 4945 4946 mcip = mgcp->mgc_client; 4947 flent = mcip->mci_flent; 4948 is_aggr = (mcip->mci_state_flags & MCIS_IS_AGGR_CLIENT); 4949 mac_srs = MCIP_TX_SRS(mcip); 4950 tx = &mac_srs->srs_tx; 4951 mac_tx_client_quiesce((mac_client_handle_t)mcip); 4952 /* 4953 * Handle growing from one ring to multiple rings. 4954 */ 4955 if (tx->st_mode == SRS_TX_BW || 4956 tx->st_mode == SRS_TX_SERIALIZE || 4957 tx->st_mode == SRS_TX_DEFAULT) { 4958 mac_ring_t *tx_ring = tx->st_arg2; 4959 4960 tx->st_arg2 = NULL; 4961 mac_tx_srs_stat_recreate(mac_srs, B_TRUE); 4962 mac_tx_srs_add_ring(mac_srs, tx_ring); 4963 if (mac_srs->srs_type & SRST_BW_CONTROL) { 4964 tx->st_mode = is_aggr ? SRS_TX_BW_AGGR : 4965 SRS_TX_BW_FANOUT; 4966 } else { 4967 tx->st_mode = is_aggr ? SRS_TX_AGGR : 4968 SRS_TX_FANOUT; 4969 } 4970 tx->st_func = mac_tx_get_func(tx->st_mode); 4971 } 4972 mac_tx_srs_add_ring(mac_srs, ring); 4973 mac_fanout_setup(mcip, flent, MCIP_RESOURCE_PROPS(mcip), 4974 mac_rx_deliver, mcip, NULL, NULL); 4975 mac_tx_client_restart((mac_client_handle_t)mcip); 4976 mgcp = mgcp->mgc_next; 4977 } 4978 break; 4979 } 4980 default: 4981 ASSERT(B_FALSE); 4982 } 4983 /* 4984 * For aggr, the default ring will be NULL to begin with.
If it 4985 * is NULL, then pick the first ring that gets added as the 4986 * default ring. Any ring in an aggregation can be removed at 4987 * any time (by the user action of removing a link) and if the 4988 * current default ring gets removed, then a new one gets 4989 * picked (see i_mac_group_rem_ring()). 4990 */ 4991 if (mip->mi_state_flags & MIS_IS_AGGR && 4992 mip->mi_default_tx_ring == NULL && 4993 ring->mr_type == MAC_RING_TYPE_TX) { 4994 mip->mi_default_tx_ring = (mac_ring_handle_t)ring; 4995 } 4996 4997 MAC_RING_UNMARK(ring, MR_INCIPIENT); 4998 return (0); 4999 } 5000 5001 /* 5002 * Remove a ring from its current group. MAC internal function for dynamic 5003 * grouping. 5004 * 5005 * The caller needs to call mac_perim_enter() before calling this function. 5006 */ 5007 void 5008 i_mac_group_rem_ring(mac_group_t *group, mac_ring_t *ring, 5009 boolean_t driver_call) 5010 { 5011 mac_impl_t *mip = (mac_impl_t *)group->mrg_mh; 5012 mac_capab_rings_t *cap_rings = NULL; 5013 mac_group_type_t group_type; 5014 5015 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 5016 5017 ASSERT(mac_find_ring((mac_group_handle_t)group, 5018 ring->mr_index) == (mac_ring_handle_t)ring); 5019 ASSERT((mac_group_t *)ring->mr_gh == group); 5020 ASSERT(ring->mr_type == group->mrg_type); 5021 5022 if (ring->mr_state == MR_INUSE) 5023 mac_stop_ring(ring); 5024 switch (ring->mr_type) { 5025 case MAC_RING_TYPE_RX: 5026 group_type = mip->mi_rx_group_type; 5027 cap_rings = &mip->mi_rx_rings_cap; 5028 5029 /* 5030 * Only hardware classified packets hold a reference to the 5031 * ring all the way up the Rx path. mac_rx_srs_remove() 5032 * will take care of quiescing the Rx path and removing the 5033 * SRS. The software classified path neither holds a reference 5034 * nor any association with the ring in mac_rx. 5035 */ 5036 if (ring->mr_srs != NULL) { 5037 mac_rx_srs_remove(ring->mr_srs); 5038 ring->mr_srs = NULL; 5039 } 5040 5041 break; 5042 case MAC_RING_TYPE_TX: 5043 { 5044 mac_grp_client_t *mgcp; 5045 mac_client_impl_t *mcip; 5046 mac_soft_ring_set_t *mac_srs; 5047 mac_srs_tx_t *tx; 5048 mac_ring_t *rem_ring; 5049 mac_group_t *defgrp; 5050 uint_t ring_info = 0; 5051 5052 /* 5053 * For TX this function is invoked in three 5054 * cases: 5055 * 5056 * 1) A failure during the 5057 * initial creation of a group, when a share is 5058 * associated with a MAC client; the SRS is not 5059 * yet set up, and will be set up later after the 5060 * group has been reserved and populated. 5061 * 5062 * 2) From mac_release_tx_group() when freeing 5063 * a TX SRS. 5064 * 5065 * 3) In the case of aggr, when a port gets removed, 5066 * the pseudo Tx rings that it exposed get removed. 5067 * 5068 * In the first two cases the SRS and its soft 5069 * rings are already quiesced. 5070 */ 5071 if (driver_call) { 5072 mac_client_impl_t *mcip; 5073 mac_soft_ring_set_t *mac_srs; 5074 mac_soft_ring_t *sringp; 5075 mac_srs_tx_t *srs_tx; 5076 5077 if (mip->mi_state_flags & MIS_IS_AGGR && 5078 mip->mi_default_tx_ring == 5079 (mac_ring_handle_t)ring) { 5080 /* pick a new default Tx ring */ 5081 mip->mi_default_tx_ring = 5082 (group->mrg_rings != ring) ?
5083 (mac_ring_handle_t)group->mrg_rings : 5084 (mac_ring_handle_t)(ring->mr_next); 5085 } 5086 /* Presently only the aggr case comes here */ 5087 if (group->mrg_state != MAC_GROUP_STATE_RESERVED) 5088 break; 5089 5090 mcip = MAC_GROUP_ONLY_CLIENT(group); 5091 ASSERT(mcip != NULL); 5092 ASSERT(mcip->mci_state_flags & MCIS_IS_AGGR_CLIENT); 5093 mac_srs = MCIP_TX_SRS(mcip); 5094 ASSERT(mac_srs->srs_tx.st_mode == SRS_TX_AGGR || 5095 mac_srs->srs_tx.st_mode == SRS_TX_BW_AGGR); 5096 srs_tx = &mac_srs->srs_tx; 5097 /* 5098 * Wake up any callers blocked on this 5099 * Tx ring due to flow control. 5100 */ 5101 sringp = srs_tx->st_soft_rings[ring->mr_index]; 5102 ASSERT(sringp != NULL); 5103 mac_tx_invoke_callbacks(mcip, (mac_tx_cookie_t)sringp); 5104 mac_tx_client_quiesce((mac_client_handle_t)mcip); 5105 mac_tx_srs_del_ring(mac_srs, ring); 5106 mac_tx_client_restart((mac_client_handle_t)mcip); 5107 break; 5108 } 5109 ASSERT(ring != (mac_ring_t *)mip->mi_default_tx_ring); 5110 group_type = mip->mi_tx_group_type; 5111 cap_rings = &mip->mi_tx_rings_cap; 5112 /* 5113 * See if we need to take the ring out of the MAC clients 5114 * using this group. 5115 */ 5116 if (MAC_GROUP_NO_CLIENT(group)) 5117 break; 5118 mgcp = group->mrg_clients; 5119 defgrp = MAC_DEFAULT_TX_GROUP(mip); 5120 while (mgcp != NULL) { 5121 mcip = mgcp->mgc_client; 5122 mac_srs = MCIP_TX_SRS(mcip); 5123 tx = &mac_srs->srs_tx; 5124 mac_tx_client_quiesce((mac_client_handle_t)mcip); 5125 /* 5126 * If we are here when removing rings from the 5127 * defgroup, mac_reserve_tx_ring() would have 5128 * already deleted the ring from the MAC 5129 * clients in the group. 5130 */ 5131 if (group != defgrp) { 5132 mac_tx_invoke_callbacks(mcip, 5133 (mac_tx_cookie_t) 5134 mac_tx_srs_get_soft_ring(mac_srs, ring)); 5135 mac_tx_srs_del_ring(mac_srs, ring); 5136 } 5137 /* 5138 * Additionally, if we are left with only 5139 * one ring in the group after this, we need 5140 * to modify the SRS mode etc. accordingly (we 5141 * haven't yet taken the ring out, so we check against 2). 5142 */ 5143 if (group->mrg_cur_count == 2) { 5144 if (ring->mr_next == NULL) 5145 rem_ring = group->mrg_rings; 5146 else 5147 rem_ring = ring->mr_next; 5148 mac_tx_invoke_callbacks(mcip, 5149 (mac_tx_cookie_t) 5150 mac_tx_srs_get_soft_ring(mac_srs, 5151 rem_ring)); 5152 mac_tx_srs_del_ring(mac_srs, rem_ring); 5153 if (rem_ring->mr_state != MR_INUSE) { 5154 (void) mac_start_ring(rem_ring); 5155 } 5156 tx->st_arg2 = (void *)rem_ring; 5157 mac_tx_srs_stat_recreate(mac_srs, B_FALSE); 5158 ring_info = mac_hwring_getinfo( 5159 (mac_ring_handle_t)rem_ring); 5160 /* 5161 * We are shrinking from multiple 5162 * rings to one ring. 5163 */ 5164 if (mac_srs->srs_type & SRST_BW_CONTROL) { 5165 tx->st_mode = SRS_TX_BW; 5166 } else if (mac_tx_serialize || 5167 (ring_info & MAC_RING_TX_SERIALIZE)) { 5168 tx->st_mode = SRS_TX_SERIALIZE; 5169 } else { 5170 tx->st_mode = SRS_TX_DEFAULT; 5171 } 5172 tx->st_func = mac_tx_get_func(tx->st_mode); 5173 } 5174 mac_tx_client_restart((mac_client_handle_t)mcip); 5175 mgcp = mgcp->mgc_next; 5176 } 5177 break; 5178 } 5179 default: 5180 ASSERT(B_FALSE); 5181 } 5182 5183 /* 5184 * Remove the ring from the group.
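 * Unlink it from the group's mrg_rings list and decrement
 * mrg_cur_count.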
5185 */ 5186 if (ring == group->mrg_rings) 5187 group->mrg_rings = ring->mr_next; 5188 else { 5189 mac_ring_t *pre; 5190 5191 pre = group->mrg_rings; 5192 while (pre->mr_next != ring) 5193 pre = pre->mr_next; 5194 pre->mr_next = ring->mr_next; 5195 } 5196 group->mrg_cur_count--; 5197 5198 if (!driver_call) { 5199 ASSERT(group_type == MAC_GROUP_TYPE_DYNAMIC); 5200 ASSERT(group->mrg_driver == NULL || 5201 cap_rings->mr_gremring != NULL); 5202 5203 /* 5204 * Remove the driver-level hardware ring. 5205 */ 5206 if (group->mrg_driver != NULL) { 5207 cap_rings->mr_gremring(group->mrg_driver, 5208 ring->mr_driver, ring->mr_type); 5209 } 5210 } 5211 5212 ring->mr_gh = NULL; 5213 if (driver_call) 5214 mac_ring_free(mip, ring); 5215 else 5216 ring->mr_flag = 0; 5217 } 5218 5219 /* 5220 * Move a ring to the target group. If needed, remove the ring from the group 5221 * that it currently belongs to. 5222 * 5223 * The caller needs to enter MAC's perimeter by calling mac_perim_enter(). 5224 */ 5225 static int 5226 mac_group_mov_ring(mac_impl_t *mip, mac_group_t *d_group, mac_ring_t *ring) 5227 { 5228 mac_group_t *s_group = (mac_group_t *)ring->mr_gh; 5229 int rv; 5230 5231 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 5232 ASSERT(d_group != NULL); 5233 ASSERT(s_group == NULL || s_group->mrg_mh == d_group->mrg_mh); 5234 5235 if (s_group == d_group) 5236 return (0); 5237 5238 /* 5239 * Remove it from the current group first. 5240 */ 5241 if (s_group != NULL) 5242 i_mac_group_rem_ring(s_group, ring, B_FALSE); 5243 5244 /* 5245 * Add it to the new group. 5246 */ 5247 rv = i_mac_group_add_ring(d_group, ring, 0); 5248 if (rv != 0) { 5249 /* 5250 * Failed to add the ring to the destination group; try to 5251 * add it back to the source group. If that also fails, the 5252 * ring is stuck in limbo, so log a message. 5253 */ 5254 if (i_mac_group_add_ring(s_group, ring, 0)) { 5255 cmn_err(CE_WARN, "%s: failed to move ring %p\n", 5256 mip->mi_name, (void *)ring); 5257 } 5258 } 5259 5260 return (rv); 5261 } 5262 5263 /* 5264 * Find a MAC address according to its value. 5265 */ 5266 mac_address_t * 5267 mac_find_macaddr(mac_impl_t *mip, uint8_t *mac_addr) 5268 { 5269 mac_address_t *map; 5270 5271 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 5272 5273 for (map = mip->mi_addresses; map != NULL; map = map->ma_next) { 5274 if (bcmp(mac_addr, map->ma_addr, map->ma_len) == 0) 5275 break; 5276 } 5277 5278 return (map); 5279 } 5280 5281 /* 5282 * Check whether the MAC address is shared by multiple clients. 5283 */ 5284 boolean_t 5285 mac_check_macaddr_shared(mac_address_t *map) 5286 { 5287 ASSERT(MAC_PERIM_HELD((mac_handle_t)map->ma_mip)); 5288 5289 return (map->ma_nusers > 1); 5290 } 5291 5292 /* 5293 * Remove the specified MAC address from the MAC address list and free it.
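 * The address must be completely unused (ma_nusers == 0) and have no
 * VLAN filters left; the VERIFYs below enforce this.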
5293 */ 5294 static void 5295 mac_free_macaddr(mac_address_t *map) 5296 { 5297 mac_impl_t *mip = map->ma_mip; 5298 5299 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 5300 VERIFY3P(mip->mi_addresses, !=, NULL); 5301 5302 VERIFY3P(map, ==, mac_find_macaddr(mip, map->ma_addr)); 5303 VERIFY3P(map, !=, NULL); 5304 VERIFY3S(map->ma_nusers, ==, 0); 5305 VERIFY3P(map->ma_vlans, ==, NULL); 5306 5307 if (map == mip->mi_addresses) { 5308 mip->mi_addresses = map->ma_next; 5309 } else { 5310 mac_address_t *pre; 5311 5312 pre = mip->mi_addresses; 5313 while (pre->ma_next != map) 5314 pre = pre->ma_next; 5315 pre->ma_next = map->ma_next; 5316 } 5317 5318 kmem_free(map, sizeof (mac_address_t)); 5319 } 5320 5321 static mac_vlan_t * 5322 mac_find_vlan(mac_address_t *map, uint16_t vid) 5323 { 5324 mac_vlan_t *mvp; 5325 5326 for (mvp = map->ma_vlans; mvp != NULL; mvp = mvp->mv_next) { 5327 if (mvp->mv_vid == vid) 5328 return (mvp); 5329 } 5330 5331 return (NULL); 5332 } 5333 5334 static mac_vlan_t * 5335 mac_add_vlan(mac_address_t *map, uint16_t vid) 5336 { 5337 mac_vlan_t *mvp; 5338 5339 /* 5340 * We should never add the same {addr, VID} tuple more 5341 * than once, but let's be sure. 5342 */ 5343 for (mvp = map->ma_vlans; mvp != NULL; mvp = mvp->mv_next) 5344 VERIFY3U(mvp->mv_vid, !=, vid); 5345 5346 /* Add the VLAN to the head of the VLAN list. */ 5347 mvp = kmem_zalloc(sizeof (mac_vlan_t), KM_SLEEP); 5348 mvp->mv_vid = vid; 5349 mvp->mv_next = map->ma_vlans; 5350 map->ma_vlans = mvp; 5351 5352 return (mvp); 5353 } 5354 5355 static void 5356 mac_rem_vlan(mac_address_t *map, mac_vlan_t *mvp) 5357 { 5358 mac_vlan_t *pre; 5359 5360 if (map->ma_vlans == mvp) { 5361 map->ma_vlans = mvp->mv_next; 5362 } else { 5363 pre = map->ma_vlans; 5364 while (pre->mv_next != mvp) { 5365 pre = pre->mv_next; 5366 5367 /* 5368 * Fail loudly if we've walked off the end of 5369 * the list without finding mvp. 5370 */ 5371 VERIFY3P(pre, !=, NULL); 5372 } 5373 pre->mv_next = mvp->mv_next; 5374 } 5375 5376 kmem_free(mvp, sizeof (mac_vlan_t)); 5377 } 5378 5379 /* 5380 * Create a new mac_address_t if this is the first use of the address 5381 * or add a VID to an existing address. In either case, the 5382 * mac_address_t acts as a list of {addr, VID} tuples where each tuple 5383 * shares the same addr. If group is non-NULL then attempt to program 5384 * the MAC's HW filters for this group. Otherwise, if group is NULL, 5385 * then the MAC has no rings and there is nothing to program. 5386 */ 5387 int 5388 mac_add_macaddr_vlan(mac_impl_t *mip, mac_group_t *group, uint8_t *addr, 5389 uint16_t vid, boolean_t use_hw) 5390 { 5391 mac_address_t *map; 5392 mac_vlan_t *mvp; 5393 int err = 0; 5394 boolean_t allocated_map = B_FALSE; 5395 boolean_t hw_mac = B_FALSE; 5396 boolean_t hw_vlan = B_FALSE; 5397 5398 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 5399 5400 map = mac_find_macaddr(mip, addr); 5401 5402 /* 5403 * If this is the first use of this MAC address then allocate 5404 * and initialize a new structure. 5405 */ 5406 if (map == NULL) { 5407 map = kmem_zalloc(sizeof (mac_address_t), KM_SLEEP); 5408 map->ma_len = mip->mi_type->mt_addr_length; 5409 bcopy(addr, map->ma_addr, map->ma_len); 5410 map->ma_nusers = 0; 5411 map->ma_group = group; 5412 map->ma_mip = mip; 5413 map->ma_untagged = B_FALSE; 5414 5415 /* Add the new MAC address to the head of the address list.
*/ 5416 map->ma_next = mip->mi_addresses; 5417 mip->mi_addresses = map; 5418 5419 allocated_map = B_TRUE; 5420 } 5421 5422 VERIFY(map->ma_group == NULL || map->ma_group == group); 5423 if (map->ma_group == NULL) 5424 map->ma_group = group; 5425 5426 if (vid == VLAN_ID_NONE) { 5427 map->ma_untagged = B_TRUE; 5428 mvp = NULL; 5429 } else { 5430 mvp = mac_add_vlan(map, vid); 5431 } 5432 5433 /* 5434 * Set the VLAN HW filter if: 5435 * 5436 * o the MAC's VLAN HW filtering is enabled, and 5437 * o the address does not currently rely on promisc mode. 5438 * 5439 * This is called even when the client specifies an untagged 5440 * address (VLAN_ID_NONE) because some MAC providers require 5441 * setting additional bits to accept untagged traffic when 5442 * VLAN HW filtering is enabled. 5443 */ 5444 if (MAC_GROUP_HW_VLAN(group) && 5445 map->ma_type != MAC_ADDRESS_TYPE_UNICAST_PROMISC) { 5446 if ((err = mac_group_addvlan(group, vid)) != 0) 5447 goto bail; 5448 5449 hw_vlan = B_TRUE; 5450 } 5451 5452 VERIFY3S(map->ma_nusers, >=, 0); 5453 map->ma_nusers++; 5454 5455 /* 5456 * If this MAC address already has a HW filter then simply 5457 * increment the counter. 5458 */ 5459 if (map->ma_nusers > 1) 5460 return (0); 5461 5462 /* 5463 * All logic from here on out is executed during initial 5464 * creation only. 5465 */ 5466 VERIFY3S(map->ma_nusers, ==, 1); 5467 5468 /* 5469 * Activate this MAC address by adding it to the reserved group. 5470 */ 5471 if (group != NULL) { 5472 err = mac_group_addmac(group, (const uint8_t *)addr); 5473 5474 /* 5475 * If the driver is out of filters then we can 5476 * continue and use promisc mode. For any other error, 5477 * assume the driver is in a state where we can't 5478 * program the filters or use promisc mode; so we must 5479 * bail. 5480 */ 5481 if (err != 0 && err != ENOSPC) { 5482 map->ma_nusers--; 5483 goto bail; 5484 } 5485 5486 hw_mac = (err == 0); 5487 } 5488 5489 if (hw_mac) { 5490 map->ma_type = MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED; 5491 return (0); 5492 } 5493 5494 /* 5495 * The MAC address addition failed. If the client requires a 5496 * hardware classified MAC address, fail the operation. This 5497 * feature is only used by sun4v vsw. 5498 */ 5499 if (use_hw && !hw_mac) { 5500 err = ENOSPC; 5501 map->ma_nusers--; 5502 goto bail; 5503 } 5504 5505 /* 5506 * If we reach this point then either the MAC doesn't have 5507 * RINGS capability or we are out of MAC address HW filters. 5508 * In any case we must put the MAC into promiscuous mode. 5509 */ 5510 VERIFY(group == NULL || !hw_mac); 5511 5512 /* 5513 * The one exception is the primary address. A non-RINGS 5514 * driver filters the primary address by default; promisc mode 5515 * is not needed. 5516 */ 5517 if ((group == NULL) && 5518 (bcmp(map->ma_addr, mip->mi_addr, map->ma_len) == 0)) { 5519 map->ma_type = MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED; 5520 return (0); 5521 } 5522 5523 /* 5524 * Enable promiscuous mode in order to receive traffic to the 5525 * new MAC address. All existing HW filters still send their 5526 * traffic to their respective group/SRSes. But with promisc 5527 * enabled all unknown traffic is delivered to the default 5528 * group where it is SW classified via mac_rx_classify(). 5529 */ 5530 if ((err = i_mac_promisc_set(mip, B_TRUE)) == 0) { 5531 map->ma_type = MAC_ADDRESS_TYPE_UNICAST_PROMISC; 5532 return (0); 5533 } 5534 5535 /* 5536 * We failed to set promisc mode and we are about to free 'map'. 
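 * Reset ma_nusers to zero and unwind the partially constructed state
 * (the VLAN HW filter, the mac_vlan_t, and the mac_address_t itself)
 * through 'bail' below.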
5537 */ 5538 map->ma_nusers = 0; 5539 5540 bail: 5541 if (hw_vlan) { 5542 int err2 = mac_group_remvlan(group, vid); 5543 5544 if (err2 != 0) { 5545 cmn_err(CE_WARN, "Failed to remove VLAN %u from group" 5546 " %d on MAC %s: %d.", vid, group->mrg_index, 5547 mip->mi_name, err2); 5548 } 5549 } 5550 5551 if (mvp != NULL) 5552 mac_rem_vlan(map, mvp); 5553 5554 if (allocated_map) 5555 mac_free_macaddr(map); 5556 5557 return (err); 5558 } 5559 5560 int 5561 mac_remove_macaddr_vlan(mac_address_t *map, uint16_t vid) 5562 { 5563 mac_vlan_t *mvp; 5564 mac_impl_t *mip = map->ma_mip; 5565 mac_group_t *group = map->ma_group; 5566 int err = 0; 5567 5568 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 5569 VERIFY3P(map, ==, mac_find_macaddr(mip, map->ma_addr)); 5570 5571 if (vid == VLAN_ID_NONE) { 5572 map->ma_untagged = B_FALSE; 5573 mvp = NULL; 5574 } else { 5575 mvp = mac_find_vlan(map, vid); 5576 VERIFY3P(mvp, !=, NULL); 5577 } 5578 5579 if (MAC_GROUP_HW_VLAN(group) && 5580 map->ma_type == MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED && 5581 ((err = mac_group_remvlan(group, vid)) != 0)) 5582 return (err); 5583 5584 if (mvp != NULL) 5585 mac_rem_vlan(map, mvp); 5586 5587 /* 5588 * If it's not the last client using this MAC address, only update 5589 * the MAC clients count. 5590 */ 5591 map->ma_nusers--; 5592 if (map->ma_nusers > 0) 5593 return (0); 5594 5595 VERIFY3S(map->ma_nusers, ==, 0); 5596 5597 /* 5598 * The MAC address is no longer used by any MAC client, so 5599 * remove it from its associated group. Turn off promiscuous 5600 * mode if this is the last address relying on it. 5601 */ 5602 switch (map->ma_type) { 5603 case MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED: 5604 /* 5605 * Don't free the preset primary address for drivers that 5606 * don't advertise RINGS capability. 5607 */ 5608 if (group == NULL) 5609 return (0); 5610 5611 if ((err = mac_group_remmac(group, map->ma_addr)) != 0) { 5612 if (vid == VLAN_ID_NONE) 5613 map->ma_untagged = B_TRUE; 5614 else 5615 (void) mac_add_vlan(map, vid); 5616 5617 /* 5618 * If we fail to remove the MAC address HW 5619 * filter but then also fail to re-add the 5620 * VLAN HW filter then we are in a busted 5621 * state. We do our best by logging a warning 5622 * and returning the original 'err' that got 5623 * us here. At this point, traffic for this 5624 * address + VLAN combination will be dropped 5625 * until the user reboots the system. In the 5626 * future, it would be nice to have a system 5627 * that can compare the state of expected 5628 * classification according to mac to the 5629 * actual state of the provider, and report 5630 * and fix any inconsistencies. 5631 */ 5632 if (MAC_GROUP_HW_VLAN(group)) { 5633 int err2; 5634 5635 err2 = mac_group_addvlan(group, vid); 5636 if (err2 != 0) { 5637 cmn_err(CE_WARN, "Failed to readd VLAN" 5638 " %u to group %d on MAC %s: %d.", 5639 vid, group->mrg_index, mip->mi_name, 5640 err2); 5641 } 5642 } 5643 5644 map->ma_nusers = 1; 5645 return (err); 5646 } 5647 5648 map->ma_group = NULL; 5649 break; 5650 case MAC_ADDRESS_TYPE_UNICAST_PROMISC: 5651 err = i_mac_promisc_set(mip, B_FALSE); 5652 break; 5653 default: 5654 panic("Unexpected ma_type 0x%x, file: %s, line %d", 5655 map->ma_type, __FILE__, __LINE__); 5656 } 5657 5658 if (err != 0) { 5659 map->ma_nusers = 1; 5660 return (err); 5661 } 5662 5663 /* 5664 * We created MAC address for the primary one at registration, so we 5665 * won't free it here. mac_fini_macaddr() will take care of it. 
5666 */ 5667 if (bcmp(map->ma_addr, mip->mi_addr, map->ma_len) != 0) 5668 mac_free_macaddr(map); 5669 5670 return (0); 5671 } 5672 5673 /* 5674 * Update an existing MAC address. The caller needs to make sure that the new 5675 * value has not been used. 5676 */ 5677 int 5678 mac_update_macaddr(mac_address_t *map, uint8_t *mac_addr) 5679 { 5680 mac_impl_t *mip = map->ma_mip; 5681 int err = 0; 5682 5683 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 5684 ASSERT(mac_find_macaddr(mip, mac_addr) == NULL); 5685 5686 switch (map->ma_type) { 5687 case MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED: 5688 /* 5689 * Update the primary address for drivers that are not 5690 * RINGS capable. 5691 */ 5692 if (mip->mi_rx_groups == NULL) { 5693 err = mip->mi_unicst(mip->mi_driver, (const uint8_t *) 5694 mac_addr); 5695 if (err != 0) 5696 return (err); 5697 break; 5698 } 5699 5700 /* 5701 * If this MAC address is not currently in use, 5702 * simply break out and update the value. 5703 */ 5704 if (map->ma_nusers == 0) 5705 break; 5706 5707 /* 5708 * Need to replace the MAC address associated with a group. 5709 */ 5710 err = mac_group_remmac(map->ma_group, map->ma_addr); 5711 if (err != 0) 5712 return (err); 5713 5714 err = mac_group_addmac(map->ma_group, mac_addr); 5715 5716 /* 5717 * Failure hints at a hardware error. The MAC layer needs an 5718 * error notification facility to handle this. For now, 5719 * simply try to restore the old value. 5720 */ 5721 if (err != 0) 5722 (void) mac_group_addmac(map->ma_group, map->ma_addr); 5723 5724 break; 5725 case MAC_ADDRESS_TYPE_UNICAST_PROMISC: 5726 /* 5727 * Nothing more needs to be done if in promiscuous mode. 5728 */ 5729 break; 5730 default: 5731 ASSERT(B_FALSE); 5732 } 5733 5734 /* 5735 * Successfully replaced the MAC address. 5736 */ 5737 if (err == 0) 5738 bcopy(mac_addr, map->ma_addr, map->ma_len); 5739 5740 return (err); 5741 } 5742 5743 /* 5744 * Freshen the MAC address with the new value. The caller must have updated 5745 * the hardware MAC address before calling this function. 5746 * This function is meant to handle the MAC address change 5747 * notification from underlying drivers. 5748 */ 5749 void 5750 mac_freshen_macaddr(mac_address_t *map, uint8_t *mac_addr) 5751 { 5752 mac_impl_t *mip = map->ma_mip; 5753 5754 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 5755 ASSERT(mac_find_macaddr(mip, mac_addr) == NULL); 5756 5757 /* 5758 * Freshen the MAC address with the new value. 5759 */ 5760 bcopy(mac_addr, map->ma_addr, map->ma_len); 5761 bcopy(mac_addr, mip->mi_addr, map->ma_len); 5762 5763 /* 5764 * Update all MAC clients that share this MAC address. 5765 */ 5766 mac_unicast_update_clients(mip, map); 5767 } 5768 5769 /* 5770 * Set up the primary MAC address. 5771 */ 5772 void 5773 mac_init_macaddr(mac_impl_t *mip) 5774 { 5775 mac_address_t *map; 5776 5777 /* 5778 * The reference count is initialized to zero, until it's really 5779 * activated. 5780 */ 5781 map = kmem_zalloc(sizeof (mac_address_t), KM_SLEEP); 5782 map->ma_len = mip->mi_type->mt_addr_length; 5783 bcopy(mip->mi_addr, map->ma_addr, map->ma_len); 5784 5785 /* 5786 * If the driver advertises RINGS capability, it shouldn't have 5787 * initialized its primary MAC address. For other drivers, including 5788 * VNIC, the primary address must work after registration. 5789 */ 5790 if (mip->mi_rx_groups == NULL) 5791 map->ma_type = MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED; 5792 5793 map->ma_mip = mip; 5794 5795 mip->mi_addresses = map; 5796 } 5797 5798 /* 5799 * Clean up the primary MAC address.
Note that only one primary MAC address 5800 * is allowed. All other MAC addresses must have been freed appropriately. 5801 */ 5802 void 5803 mac_fini_macaddr(mac_impl_t *mip) 5804 { 5805 mac_address_t *map = mip->mi_addresses; 5806 5807 if (map == NULL) 5808 return; 5809 5810 /* 5811 * If mi_addresses is initialized, there should be exactly one 5812 * entry left on the list with no users. 5813 */ 5814 VERIFY3S(map->ma_nusers, ==, 0); 5815 VERIFY3P(map->ma_next, ==, NULL); 5816 VERIFY3P(map->ma_vlans, ==, NULL); 5817 5818 kmem_free(map, sizeof (mac_address_t)); 5819 mip->mi_addresses = NULL; 5820 } 5821 5822 /* 5823 * Logging related functions. 5824 * 5825 * Note that kernel statistics have been extended to maintain finer 5826 * granularity of statistics, viz. hardware lane, software lane, fanout 5827 * stats, etc. However, extended accounting continues to support only 5828 * aggregate statistics, as before. 5829 */ 5830 5831 /* Write the flow description to a netinfo_t record */ 5832 static netinfo_t * 5833 mac_write_flow_desc(flow_entry_t *flent, mac_client_impl_t *mcip) 5834 { 5835 netinfo_t *ninfo; 5836 net_desc_t *ndesc; 5837 flow_desc_t *fdesc; 5838 mac_resource_props_t *mrp; 5839 5840 ninfo = kmem_zalloc(sizeof (netinfo_t), KM_NOSLEEP); 5841 if (ninfo == NULL) 5842 return (NULL); 5843 ndesc = kmem_zalloc(sizeof (net_desc_t), KM_NOSLEEP); 5844 if (ndesc == NULL) { 5845 kmem_free(ninfo, sizeof (netinfo_t)); 5846 return (NULL); 5847 } 5848 5849 /* 5850 * Grab the fe_lock to see a self-consistent fe_flow_desc. 5851 * Updates to the fe_flow_desc are done under the fe_lock. 5852 */ 5853 mutex_enter(&flent->fe_lock); 5854 fdesc = &flent->fe_flow_desc; 5855 mrp = &flent->fe_resource_props; 5856 5857 ndesc->nd_name = flent->fe_flow_name; 5858 ndesc->nd_devname = mcip->mci_name; 5859 bcopy(fdesc->fd_src_mac, ndesc->nd_ehost, ETHERADDRL); 5860 bcopy(fdesc->fd_dst_mac, ndesc->nd_edest, ETHERADDRL); 5861 ndesc->nd_sap = htonl(fdesc->fd_sap); 5862 ndesc->nd_isv4 = (uint8_t)fdesc->fd_ipversion == IPV4_VERSION; 5863 ndesc->nd_bw_limit = mrp->mrp_maxbw; 5864 if (ndesc->nd_isv4) { 5865 ndesc->nd_saddr[3] = htonl(fdesc->fd_local_addr.s6_addr32[3]); 5866 ndesc->nd_daddr[3] = htonl(fdesc->fd_remote_addr.s6_addr32[3]); 5867 } else { 5868 bcopy(&fdesc->fd_local_addr, ndesc->nd_saddr, IPV6_ADDR_LEN); 5869 bcopy(&fdesc->fd_remote_addr, ndesc->nd_daddr, IPV6_ADDR_LEN); 5870 } 5871 ndesc->nd_sport = htons(fdesc->fd_local_port); 5872 ndesc->nd_dport = htons(fdesc->fd_remote_port); 5873 ndesc->nd_protocol = (uint8_t)fdesc->fd_protocol; 5874 mutex_exit(&flent->fe_lock); 5875 5876 ninfo->ni_record = ndesc; 5877 ninfo->ni_size = sizeof (net_desc_t); 5878 ninfo->ni_type = EX_NET_FLDESC_REC; 5879 5880 return (ninfo); 5881 } 5882 5883 /* Write the flow statistics to a netinfo_t record */ 5884 static netinfo_t * 5885 mac_write_flow_stats(flow_entry_t *flent) 5886 { 5887 netinfo_t *ninfo; 5888 net_stat_t *nstat; 5889 mac_soft_ring_set_t *mac_srs; 5890 mac_rx_stats_t *mac_rx_stat; 5891 mac_tx_stats_t *mac_tx_stat; 5892 int i; 5893 5894 ninfo = kmem_zalloc(sizeof (netinfo_t), KM_NOSLEEP); 5895 if (ninfo == NULL) 5896 return (NULL); 5897 nstat = kmem_zalloc(sizeof (net_stat_t), KM_NOSLEEP); 5898 if (nstat == NULL) { 5899 kmem_free(ninfo, sizeof (netinfo_t)); 5900 return (NULL); 5901 } 5902 5903 nstat->ns_name = flent->fe_flow_name; 5904 for (i = 0; i < flent->fe_rx_srs_cnt; i++) { 5905 mac_srs = (mac_soft_ring_set_t *)flent->fe_rx_srs[i]; 5906 mac_rx_stat = &mac_srs->srs_rx.sr_stat; 5907 5908 nstat->ns_ibytes +=
mac_rx_stat->mrs_intrbytes + 5909 mac_rx_stat->mrs_pollbytes + mac_rx_stat->mrs_lclbytes; 5910 nstat->ns_ipackets += mac_rx_stat->mrs_intrcnt + 5911 mac_rx_stat->mrs_pollcnt + mac_rx_stat->mrs_lclcnt; 5912 nstat->ns_ierrors += mac_rx_stat->mrs_ierrors; 5913 } 5914 5915 mac_srs = (mac_soft_ring_set_t *)(flent->fe_tx_srs); 5916 if (mac_srs != NULL) { 5917 mac_tx_stat = &mac_srs->srs_tx.st_stat; 5918 5919 nstat->ns_obytes = mac_tx_stat->mts_obytes; 5920 nstat->ns_opackets = mac_tx_stat->mts_opackets; 5921 nstat->ns_oerrors = mac_tx_stat->mts_oerrors; 5922 } 5923 5924 ninfo->ni_record = nstat; 5925 ninfo->ni_size = sizeof (net_stat_t); 5926 ninfo->ni_type = EX_NET_FLSTAT_REC; 5927 5928 return (ninfo); 5929 } 5930 5931 /* Write the link description to a netinfo_t record */ 5932 static netinfo_t * 5933 mac_write_link_desc(mac_client_impl_t *mcip) 5934 { 5935 netinfo_t *ninfo; 5936 net_desc_t *ndesc; 5937 flow_entry_t *flent = mcip->mci_flent; 5938 5939 ninfo = kmem_zalloc(sizeof (netinfo_t), KM_NOSLEEP); 5940 if (ninfo == NULL) 5941 return (NULL); 5942 ndesc = kmem_zalloc(sizeof (net_desc_t), KM_NOSLEEP); 5943 if (ndesc == NULL) { 5944 kmem_free(ninfo, sizeof (netinfo_t)); 5945 return (NULL); 5946 } 5947 5948 ndesc->nd_name = mcip->mci_name; 5949 ndesc->nd_devname = mcip->mci_name; 5950 ndesc->nd_isv4 = B_TRUE; 5951 /* 5952 * Grab the fe_lock to see a self-consistent fe_flow_desc. 5953 * Updates to the fe_flow_desc are done under the fe_lock 5954 * after removing the flent from the flow table. 5955 */ 5956 mutex_enter(&flent->fe_lock); 5957 bcopy(flent->fe_flow_desc.fd_src_mac, ndesc->nd_ehost, ETHERADDRL); 5958 mutex_exit(&flent->fe_lock); 5959 5960 ninfo->ni_record = ndesc; 5961 ninfo->ni_size = sizeof (net_desc_t); 5962 ninfo->ni_type = EX_NET_LNDESC_REC; 5963 5964 return (ninfo); 5965 } 5966 5967 /* Write the link statistics to a netinfo_t record */ 5968 static netinfo_t * 5969 mac_write_link_stats(mac_client_impl_t *mcip) 5970 { 5971 netinfo_t *ninfo; 5972 net_stat_t *nstat; 5973 flow_entry_t *flent; 5974 mac_soft_ring_set_t *mac_srs; 5975 mac_rx_stats_t *mac_rx_stat; 5976 mac_tx_stats_t *mac_tx_stat; 5977 int i; 5978 5979 ninfo = kmem_zalloc(sizeof (netinfo_t), KM_NOSLEEP); 5980 if (ninfo == NULL) 5981 return (NULL); 5982 nstat = kmem_zalloc(sizeof (net_stat_t), KM_NOSLEEP); 5983 if (nstat == NULL) { 5984 kmem_free(ninfo, sizeof (netinfo_t)); 5985 return (NULL); 5986 } 5987 5988 nstat->ns_name = mcip->mci_name; 5989 flent = mcip->mci_flent; 5990 if (flent != NULL) { 5991 for (i = 0; i < flent->fe_rx_srs_cnt; i++) { 5992 mac_srs = (mac_soft_ring_set_t *)flent->fe_rx_srs[i]; 5993 mac_rx_stat = &mac_srs->srs_rx.sr_stat; 5994 5995 nstat->ns_ibytes += mac_rx_stat->mrs_intrbytes + 5996 mac_rx_stat->mrs_pollbytes + 5997 mac_rx_stat->mrs_lclbytes; 5998 nstat->ns_ipackets += mac_rx_stat->mrs_intrcnt + 5999 mac_rx_stat->mrs_pollcnt + mac_rx_stat->mrs_lclcnt; 6000 nstat->ns_ierrors += mac_rx_stat->mrs_ierrors; 6001 } 6002 } 6003 6004 mac_srs = (mac_soft_ring_set_t *)(mcip->mci_flent->fe_tx_srs); 6005 if (mac_srs != NULL) { 6006 mac_tx_stat = &mac_srs->srs_tx.st_stat; 6007 6008 nstat->ns_obytes = mac_tx_stat->mts_obytes; 6009 nstat->ns_opackets = mac_tx_stat->mts_opackets; 6010 nstat->ns_oerrors = mac_tx_stat->mts_oerrors; 6011 } 6012 6013 ninfo->ni_record = nstat; 6014 ninfo->ni_size = sizeof (net_stat_t); 6015 ninfo->ni_type = EX_NET_LNSTAT_REC; 6016 6017 return (ninfo); 6018 } 6019 6020 typedef struct i_mac_log_state_s { 6021 boolean_t mi_last; 6022 int mi_fenable; 6023 int mi_lenable; 6024 list_t
*mi_list; 6025 } i_mac_log_state_t; 6026 6027 /* 6028 * For a given flow, if the description has not been logged before, do it now. 6029 * If it is a VNIC, then we have collected information about it from the MAC 6030 * table, so skip it. 6031 * 6032 * Called through mac_flow_walk_nolock() 6033 * 6034 * Return 0 if successful. 6035 */ 6036 static int 6037 mac_log_flowinfo(flow_entry_t *flent, void *arg) 6038 { 6039 mac_client_impl_t *mcip = flent->fe_mcip; 6040 i_mac_log_state_t *lstate = arg; 6041 netinfo_t *ninfo; 6042 6043 if (mcip == NULL) 6044 return (0); 6045 6046 /* 6047 * If the name starts with "vnic", and the FLOW_USER flag is set (to 6048 * exclude the mcast and active flow entries created implicitly for 6049 * a vnic), it is a VNIC flow, i.e. vnic1 is a vnic flow, 6050 * vnic/bge1/mcast1 is not and neither is vnic/bge1/active. 6051 */ 6052 if (strncasecmp(flent->fe_flow_name, "vnic", 4) == 0 && 6053 (flent->fe_type & FLOW_USER) != 0) { 6054 return (0); 6055 } 6056 6057 if (!flent->fe_desc_logged) { 6058 /* 6059 * We don't return an error because we want to continue the 6060 * walk in case this is the last walk, which means we 6061 * need to reset fe_desc_logged in all the flows. 6062 */ 6063 if ((ninfo = mac_write_flow_desc(flent, mcip)) == NULL) 6064 return (0); 6065 list_insert_tail(lstate->mi_list, ninfo); 6066 flent->fe_desc_logged = B_TRUE; 6067 } 6068 6069 /* 6070 * Regardless of the error, we want to proceed in case we have to 6071 * reset fe_desc_logged. 6072 */ 6073 ninfo = mac_write_flow_stats(flent); 6074 if (ninfo == NULL) 6075 return (-1); 6076 6077 list_insert_tail(lstate->mi_list, ninfo); 6078 6079 if (mcip != NULL && !(mcip->mci_state_flags & MCIS_DESC_LOGGED)) 6080 flent->fe_desc_logged = B_FALSE; 6081 6082 return (0); 6083 } 6084 6085 /* 6086 * Log the description for each mac client of this mac_impl_t, if it 6087 * hasn't already been done. Additionally, log statistics for the link as 6088 * well. Walk the flow table and log information for each flow as well. 6089 * If it is the last walk (mi_last), then we turn off MCIS_DESC_LOGGED (and 6090 * also fe_desc_logged, if flow logging is on) since we want to log the 6091 * description if and when logging is restarted. 6092 * 6093 * Return 0 upon success or -1 upon failure 6094 */ 6095 static int 6096 i_mac_impl_log(mac_impl_t *mip, i_mac_log_state_t *lstate) 6097 { 6098 mac_client_impl_t *mcip; 6099 netinfo_t *ninfo; 6100 6101 i_mac_perim_enter(mip); 6102 /* 6103 * Only walk the client list for NIC and etherstub 6104 */ 6105 if ((mip->mi_state_flags & MIS_DISABLED) || 6106 ((mip->mi_state_flags & MIS_IS_VNIC) && 6107 (mac_get_lower_mac_handle((mac_handle_t)mip) != NULL))) { 6108 i_mac_perim_exit(mip); 6109 return (0); 6110 } 6111 6112 for (mcip = mip->mi_clients_list; mcip != NULL; 6113 mcip = mcip->mci_client_next) { 6114 if (!MCIP_DATAPATH_SETUP(mcip)) 6115 continue; 6116 if (lstate->mi_lenable) { 6117 if (!(mcip->mci_state_flags & MCIS_DESC_LOGGED)) { 6118 ninfo = mac_write_link_desc(mcip); 6119 if (ninfo == NULL) { 6120 /* 6121 * We can't terminate it if this is the last 6122 * walk, else there might be some links with 6123 * MCIS_DESC_LOGGED set to true, which means 6124 * their description won't be logged the next 6125 * time logging is started (similarly for the 6126 * flows within such links). We can continue 6127 * without walking the flow table (i.e. to 6128 * set fe_desc_logged to false) because we 6129 * won't have written any flow stuff for this 6130 * link as we haven't logged the link itself.
6131 */ 6132 i_mac_perim_exit(mip); 6133 if (lstate->mi_last) 6134 return (0); 6135 else 6136 return (-1); 6137 } 6138 mcip->mci_state_flags |= MCIS_DESC_LOGGED; 6139 list_insert_tail(lstate->mi_list, ninfo); 6140 } 6141 } 6142 6143 ninfo = mac_write_link_stats(mcip); 6144 if (ninfo == NULL && !lstate->mi_last) { 6145 i_mac_perim_exit(mip); 6146 return (-1); 6147 } 6148 list_insert_tail(lstate->mi_list, ninfo); 6149 6150 if (lstate->mi_last) 6151 mcip->mci_state_flags &= ~MCIS_DESC_LOGGED; 6152 6153 if (lstate->mi_fenable) { 6154 if (mcip->mci_subflow_tab != NULL) { 6155 (void) mac_flow_walk_nolock( 6156 mcip->mci_subflow_tab, mac_log_flowinfo, 6157 lstate); 6158 } 6159 } 6160 } 6161 i_mac_perim_exit(mip); 6162 return (0); 6163 } 6164 6165 /* 6166 * modhash walker function to add a mac_impl_t to a list 6167 */ 6168 /*ARGSUSED*/ 6169 static uint_t 6170 i_mac_impl_list_walker(mod_hash_key_t key, mod_hash_val_t *val, void *arg) 6171 { 6172 list_t *list = (list_t *)arg; 6173 mac_impl_t *mip = (mac_impl_t *)val; 6174 6175 if ((mip->mi_state_flags & MIS_DISABLED) == 0) { 6176 list_insert_tail(list, mip); 6177 mip->mi_ref++; 6178 } 6179 6180 return (MH_WALK_CONTINUE); 6181 } 6182 6183 void 6184 i_mac_log_info(list_t *net_log_list, i_mac_log_state_t *lstate) 6185 { 6186 list_t mac_impl_list; 6187 mac_impl_t *mip; 6188 netinfo_t *ninfo; 6189 6190 /* Create list of mac_impls */ 6191 ASSERT(RW_LOCK_HELD(&i_mac_impl_lock)); 6192 list_create(&mac_impl_list, sizeof (mac_impl_t), offsetof(mac_impl_t, 6193 mi_node)); 6194 mod_hash_walk(i_mac_impl_hash, i_mac_impl_list_walker, &mac_impl_list); 6195 rw_exit(&i_mac_impl_lock); 6196 6197 /* Create log entries for each mac_impl */ 6198 for (mip = list_head(&mac_impl_list); mip != NULL; 6199 mip = list_next(&mac_impl_list, mip)) { 6200 if (i_mac_impl_log(mip, lstate) != 0) 6201 continue; 6202 } 6203 6204 /* Remove elements and destroy list of mac_impls */ 6205 rw_enter(&i_mac_impl_lock, RW_WRITER); 6206 while ((mip = list_remove_tail(&mac_impl_list)) != NULL) { 6207 mip->mi_ref--; 6208 } 6209 rw_exit(&i_mac_impl_lock); 6210 list_destroy(&mac_impl_list); 6211 6212 /* 6213 * Write log entries to files outside of locks, free associated 6214 * structures, and remove entries from the list. 6215 */ 6216 while ((ninfo = list_head(net_log_list)) != NULL) { 6217 (void) exacct_commit_netinfo(ninfo->ni_record, ninfo->ni_type); 6218 list_remove(net_log_list, ninfo); 6219 kmem_free(ninfo->ni_record, ninfo->ni_size); 6220 kmem_free(ninfo, sizeof (*ninfo)); 6221 } 6222 list_destroy(net_log_list); 6223 } 6224 6225 /* 6226 * The timer thread that runs every mac_logging_interval seconds and logs 6227 * link and/or flow information. 
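 *
 * In sketch form (all names below are from this file), one timer
 * cycle looks like:
 *
 *	mac_log_linkinfo()
 *	{
 *		if (!mac_flow_log_enable && !mac_link_log_enable)
 *			return;			logging was stopped
 *		i_mac_log_info();		log all mac_impls and flows
 *		if (mac_flow_log_enable || mac_link_log_enable)
 *			timeout(mac_log_linkinfo, NULL,
 *			    SEC_TO_TICK(mac_logging_interval));
 *	}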
6228 */ 6229 /* ARGSUSED */ 6230 void 6231 mac_log_linkinfo(void *arg) 6232 { 6233 i_mac_log_state_t lstate; 6234 list_t net_log_list; 6235 6236 list_create(&net_log_list, sizeof (netinfo_t), 6237 offsetof(netinfo_t, ni_link)); 6238 6239 rw_enter(&i_mac_impl_lock, RW_READER); 6240 if (!mac_flow_log_enable && !mac_link_log_enable) { 6241 rw_exit(&i_mac_impl_lock); 6242 return; 6243 } 6244 lstate.mi_fenable = mac_flow_log_enable; 6245 lstate.mi_lenable = mac_link_log_enable; 6246 lstate.mi_last = B_FALSE; 6247 lstate.mi_list = &net_log_list; 6248 6249 /* Write log entries for each mac_impl in the list */ 6250 i_mac_log_info(&net_log_list, &lstate); 6251 6252 if (mac_flow_log_enable || mac_link_log_enable) { 6253 mac_logging_timer = timeout(mac_log_linkinfo, NULL, 6254 SEC_TO_TICK(mac_logging_interval)); 6255 } 6256 } 6257 6258 typedef struct i_mac_fastpath_state_s { 6259 boolean_t mf_disable; 6260 int mf_err; 6261 } i_mac_fastpath_state_t; 6262 6263 /* modhash walker function to enable or disable fastpath */ 6264 /*ARGSUSED*/ 6265 static uint_t 6266 i_mac_fastpath_walker(mod_hash_key_t key, mod_hash_val_t *val, 6267 void *arg) 6268 { 6269 i_mac_fastpath_state_t *state = arg; 6270 mac_handle_t mh = (mac_handle_t)val; 6271 6272 if (state->mf_disable) 6273 state->mf_err = mac_fastpath_disable(mh); 6274 else 6275 mac_fastpath_enable(mh); 6276 6277 return (state->mf_err == 0 ? MH_WALK_CONTINUE : MH_WALK_TERMINATE); 6278 } 6279 6280 /* 6281 * Start the logging timer. 6282 */ 6283 int 6284 mac_start_logusage(mac_logtype_t type, uint_t interval) 6285 { 6286 i_mac_fastpath_state_t dstate = {B_TRUE, 0}; 6287 i_mac_fastpath_state_t estate = {B_FALSE, 0}; 6288 int err; 6289 6290 rw_enter(&i_mac_impl_lock, RW_WRITER); 6291 switch (type) { 6292 case MAC_LOGTYPE_FLOW: 6293 if (mac_flow_log_enable) { 6294 rw_exit(&i_mac_impl_lock); 6295 return (0); 6296 } 6297 /* FALLTHRU */ 6298 case MAC_LOGTYPE_LINK: 6299 if (mac_link_log_enable) { 6300 rw_exit(&i_mac_impl_lock); 6301 return (0); 6302 } 6303 break; 6304 default: 6305 ASSERT(0); 6306 } 6307 6308 /* Disable fastpath */ 6309 mod_hash_walk(i_mac_impl_hash, i_mac_fastpath_walker, &dstate); 6310 if ((err = dstate.mf_err) != 0) { 6311 /* Reenable fastpath */ 6312 mod_hash_walk(i_mac_impl_hash, i_mac_fastpath_walker, &estate); 6313 rw_exit(&i_mac_impl_lock); 6314 return (err); 6315 } 6316 6317 switch (type) { 6318 case MAC_LOGTYPE_FLOW: 6319 mac_flow_log_enable = B_TRUE; 6320 /* FALLTHRU */ 6321 case MAC_LOGTYPE_LINK: 6322 mac_link_log_enable = B_TRUE; 6323 break; 6324 } 6325 6326 mac_logging_interval = interval; 6327 rw_exit(&i_mac_impl_lock); 6328 mac_log_linkinfo(NULL); 6329 return (0); 6330 } 6331 6332 /* 6333 * Stop the logging timer if both link and flow logging are turned off. 
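 * In sketch form, the pairing with mac_start_logusage() is:
 *
 *	mac_start_logusage()	disable fastpath on all macs, set the
 *				*_log_enable flags and kick off the
 *				first mac_log_linkinfo() walk
 *	mac_stop_logusage()	clear the *_log_enable flags, reenable
 *				fastpath, untimeout() the timer and do
 *				one final logging walk with mi_last set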
6334 */ 6335 void 6336 mac_stop_logusage(mac_logtype_t type) 6337 { 6338 i_mac_log_state_t lstate; 6339 i_mac_fastpath_state_t estate = {B_FALSE, 0}; 6340 list_t net_log_list; 6341 6342 list_create(&net_log_list, sizeof (netinfo_t), 6343 offsetof(netinfo_t, ni_link)); 6344 6345 rw_enter(&i_mac_impl_lock, RW_WRITER); 6346 6347 lstate.mi_fenable = mac_flow_log_enable; 6348 lstate.mi_lenable = mac_link_log_enable; 6349 lstate.mi_list = &net_log_list; 6350 6351 /* Last walk */ 6352 lstate.mi_last = B_TRUE; 6353 6354 switch (type) { 6355 case MAC_LOGTYPE_FLOW: 6356 if (lstate.mi_fenable) { 6357 ASSERT(mac_link_log_enable); 6358 mac_flow_log_enable = B_FALSE; 6359 mac_link_log_enable = B_FALSE; 6360 break; 6361 } 6362 /* FALLTHRU */ 6363 case MAC_LOGTYPE_LINK: 6364 if (!lstate.mi_lenable || mac_flow_log_enable) { 6365 rw_exit(&i_mac_impl_lock); 6366 return; 6367 } 6368 mac_link_log_enable = B_FALSE; 6369 break; 6370 default: 6371 ASSERT(0); 6372 } 6373 6374 /* Reenable fastpath */ 6375 mod_hash_walk(i_mac_impl_hash, i_mac_fastpath_walker, &estate); 6376 6377 (void) untimeout(mac_logging_timer); 6378 mac_logging_timer = NULL; 6379 6380 /* Write log entries for each mac_impl in the list */ 6381 i_mac_log_info(&net_log_list, &lstate); 6382 } 6383 6384 /* 6385 * Walk the rx and tx SRS/SRs for a flow and update the priority value. 6386 */ 6387 void 6388 mac_flow_update_priority(mac_client_impl_t *mcip, flow_entry_t *flent) 6389 { 6390 pri_t pri; 6391 int count; 6392 mac_soft_ring_set_t *mac_srs; 6393 6394 if (flent->fe_rx_srs_cnt <= 0) 6395 return; 6396 6397 if (((mac_soft_ring_set_t *)flent->fe_rx_srs[0])->srs_type == 6398 SRST_FLOW) { 6399 pri = FLOW_PRIORITY(mcip->mci_min_pri, 6400 mcip->mci_max_pri, 6401 flent->fe_resource_props.mrp_priority); 6402 } else { 6403 pri = mcip->mci_max_pri; 6404 } 6405 6406 for (count = 0; count < flent->fe_rx_srs_cnt; count++) { 6407 mac_srs = flent->fe_rx_srs[count]; 6408 mac_update_srs_priority(mac_srs, pri); 6409 } 6410 /* 6411 * If we have a Tx SRS, we need to modify all the threads associated 6412 * with it. 6413 */ 6414 if (flent->fe_tx_srs != NULL) 6415 mac_update_srs_priority(flent->fe_tx_srs, pri); 6416 } 6417 6418 /* 6419 * RX and TX rings are reserved according to different semantics depending 6420 * on the requests from the MAC clients and type of rings: 6421 * 6422 * On the Tx side, by default we reserve individual rings, independently from 6423 * the groups. 6424 * 6425 * On the Rx side, the reservation is at the granularity of the group 6426 * of rings, and used for v12n level 1 only. It has a special case for the 6427 * primary client. 6428 * 6429 * If a share is allocated to a MAC client, we allocate a TX group and an 6430 * RX group to the client, and assign TX rings and RX rings to these 6431 * groups according to information gathered from the driver through 6432 * the share capability. 6433 * 6434 * The foreseeable evolution of Rx rings will handle v12n level 2 and higher 6435 * to allocate individual rings out of a group and program the hw classifier 6436 * based on IP address or higher level criteria. 6437 */ 6438 6439 /* 6440 * mac_reserve_tx_ring() 6441 * Reserve an unused ring by marking it with MR_INUSE state. 6442 * As reserved, the ring is ready to function. 6443 * 6444 * Notes for Hybrid I/O: 6445 * 6446 * If a specific ring is needed, it is specified through the desired_ring 6447 * argument. Otherwise that argument is set to NULL.
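 * (As currently written, though, the function dereferences desired_ring
 * unconditionally, so all of its callers pass a specific ring.)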
6448 * If the desired ring was previously allocated to another client, this 6449 * function swaps it with a new ring from the group of unassigned rings. 6450 */ 6451 mac_ring_t * 6452 mac_reserve_tx_ring(mac_impl_t *mip, mac_ring_t *desired_ring) 6453 { 6454 mac_group_t *group; 6455 mac_grp_client_t *mgcp; 6456 mac_client_impl_t *mcip; 6457 mac_soft_ring_set_t *srs; 6458 6459 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 6460 6461 /* 6462 * Find an available ring and start it before changing its status. 6463 * The unassigned rings are at the end of the mi_tx_groups 6464 * array. 6465 */ 6466 group = MAC_DEFAULT_TX_GROUP(mip); 6467 6468 /* Can't take the default ring out of the default group */ 6469 ASSERT(desired_ring != (mac_ring_t *)mip->mi_default_tx_ring); 6470 6471 if (desired_ring->mr_state == MR_FREE) { 6472 ASSERT(MAC_GROUP_NO_CLIENT(group)); 6473 if (mac_start_ring(desired_ring) != 0) 6474 return (NULL); 6475 return (desired_ring); 6476 } 6477 /* 6478 * There are clients using this ring, so let's move the clients 6479 * away from using this ring. 6480 */ 6481 for (mgcp = group->mrg_clients; mgcp != NULL; mgcp = mgcp->mgc_next) { 6482 mcip = mgcp->mgc_client; 6483 mac_tx_client_quiesce((mac_client_handle_t)mcip); 6484 srs = MCIP_TX_SRS(mcip); 6485 ASSERT(mac_tx_srs_ring_present(srs, desired_ring)); 6486 mac_tx_invoke_callbacks(mcip, 6487 (mac_tx_cookie_t)mac_tx_srs_get_soft_ring(srs, 6488 desired_ring)); 6489 mac_tx_srs_del_ring(srs, desired_ring); 6490 mac_tx_client_restart((mac_client_handle_t)mcip); 6491 } 6492 return (desired_ring); 6493 } 6494 6495 /* 6496 * For a non-default group with multiple clients, return the primary client. 6497 */ 6498 static mac_client_impl_t * 6499 mac_get_grp_primary(mac_group_t *grp) 6500 { 6501 mac_grp_client_t *mgcp = grp->mrg_clients; 6502 mac_client_impl_t *mcip; 6503 6504 while (mgcp != NULL) { 6505 mcip = mgcp->mgc_client; 6506 if (mcip->mci_flent->fe_type & FLOW_PRIMARY_MAC) 6507 return (mcip); 6508 mgcp = mgcp->mgc_next; 6509 } 6510 return (NULL); 6511 } 6512 6513 /* 6514 * Hybrid I/O specifies the ring that should be given to a share. 6515 * If the ring is already used by clients, then we need to release 6516 * the ring back to the default group so that we can give it to 6517 * the share. This means the clients using this ring now get a 6518 * replacement ring. If there aren't any replacement rings, this 6519 * function returns a failure. 6520 */ 6521 static int 6522 mac_reclaim_ring_from_grp(mac_impl_t *mip, mac_ring_type_t ring_type, 6523 mac_ring_t *ring, mac_ring_t **rings, int nrings) 6524 { 6525 mac_group_t *group = (mac_group_t *)ring->mr_gh; 6526 mac_resource_props_t *mrp; 6527 mac_client_impl_t *mcip; 6528 mac_group_t *defgrp; 6529 mac_ring_t *tring; 6530 mac_group_t *tgrp; 6531 int i; 6532 int j; 6533 6534 mcip = MAC_GROUP_ONLY_CLIENT(group); 6535 if (mcip == NULL) 6536 mcip = mac_get_grp_primary(group); 6537 ASSERT(mcip != NULL); 6538 ASSERT(mcip->mci_share == 0); 6539 6540 mrp = MCIP_RESOURCE_PROPS(mcip); 6541 if (ring_type == MAC_RING_TYPE_RX) { 6542 defgrp = mip->mi_rx_donor_grp; 6543 if ((mrp->mrp_mask & MRP_RX_RINGS) == 0) { 6544 /* Need to put this mac client in the default group */ 6545 if (mac_rx_switch_group(mcip, group, defgrp) != 0) 6546 return (ENOSPC); 6547 } else { 6548 /* 6549 * Switch this ring with some other ring from 6550 * the default group.
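			 * The replacement must not itself be one of the
			 * rings the share asked for (rings[]), and the
			 * ring at index 0 stays in the default group.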
6551 */ 6552 for (tring = defgrp->mrg_rings; tring != NULL; 6553 tring = tring->mr_next) { 6554 if (tring->mr_index == 0) 6555 continue; 6556 for (j = 0; j < nrings; j++) { 6557 if (rings[j] == tring) 6558 break; 6559 } 6560 if (j >= nrings) 6561 break; 6562 } 6563 if (tring == NULL) 6564 return (ENOSPC); 6565 if (mac_group_mov_ring(mip, group, tring) != 0) 6566 return (ENOSPC); 6567 if (mac_group_mov_ring(mip, defgrp, ring) != 0) { 6568 (void) mac_group_mov_ring(mip, defgrp, tring); 6569 return (ENOSPC); 6570 } 6571 } 6572 ASSERT(ring->mr_gh == (mac_group_handle_t)defgrp); 6573 return (0); 6574 } 6575 6576 defgrp = MAC_DEFAULT_TX_GROUP(mip); 6577 if (ring == (mac_ring_t *)mip->mi_default_tx_ring) { 6578 /* 6579 * See if we can get a spare ring to replace the default 6580 * ring. 6581 */ 6582 if (defgrp->mrg_cur_count == 1) { 6583 /* 6584 * Need to get a ring from another client, see if 6585 * there are any clients that can be moved to 6586 * the default group, thereby freeing some rings. 6587 */ 6588 for (i = 0; i < mip->mi_tx_group_count; i++) { 6589 tgrp = &mip->mi_tx_groups[i]; 6590 if (tgrp->mrg_state == 6591 MAC_GROUP_STATE_REGISTERED) { 6592 continue; 6593 } 6594 mcip = MAC_GROUP_ONLY_CLIENT(tgrp); 6595 if (mcip == NULL) 6596 mcip = mac_get_grp_primary(tgrp); 6597 ASSERT(mcip != NULL); 6598 mrp = MCIP_RESOURCE_PROPS(mcip); 6599 if ((mrp->mrp_mask & MRP_TX_RINGS) == 0) { 6600 ASSERT(tgrp->mrg_cur_count == 1); 6601 /* 6602 * If this ring is part of the 6603 * rings asked by the share we cannot 6604 * use it as the default ring. 6605 */ 6606 for (j = 0; j < nrings; j++) { 6607 if (rings[j] == tgrp->mrg_rings) 6608 break; 6609 } 6610 if (j < nrings) 6611 continue; 6612 mac_tx_client_quiesce( 6613 (mac_client_handle_t)mcip); 6614 mac_tx_switch_group(mcip, tgrp, 6615 defgrp); 6616 mac_tx_client_restart( 6617 (mac_client_handle_t)mcip); 6618 break; 6619 } 6620 } 6621 /* 6622 * All the rings are reserved, can't give up the 6623 * default ring. 6624 */ 6625 if (defgrp->mrg_cur_count <= 1) 6626 return (ENOSPC); 6627 } 6628 /* 6629 * Swap the default ring with another. 6630 */ 6631 for (tring = defgrp->mrg_rings; tring != NULL; 6632 tring = tring->mr_next) { 6633 /* 6634 * If this ring is part of the rings asked by the 6635 * share we cannot use it as the default ring. 6636 */ 6637 for (j = 0; j < nrings; j++) { 6638 if (rings[j] == tring) 6639 break; 6640 } 6641 if (j >= nrings) 6642 break; 6643 } 6644 ASSERT(tring != NULL); 6645 mip->mi_default_tx_ring = (mac_ring_handle_t)tring; 6646 return (0); 6647 } 6648 /* 6649 * The Tx ring is with a group reserved by a MAC client. See if 6650 * we can swap it. 6651 */ 6652 ASSERT(group->mrg_state == MAC_GROUP_STATE_RESERVED); 6653 mcip = MAC_GROUP_ONLY_CLIENT(group); 6654 if (mcip == NULL) 6655 mcip = mac_get_grp_primary(group); 6656 ASSERT(mcip != NULL); 6657 mrp = MCIP_RESOURCE_PROPS(mcip); 6658 mac_tx_client_quiesce((mac_client_handle_t)mcip); 6659 if ((mrp->mrp_mask & MRP_TX_RINGS) == 0) { 6660 ASSERT(group->mrg_cur_count == 1); 6661 /* Put this mac client in the default group */ 6662 mac_tx_switch_group(mcip, group, defgrp); 6663 } else { 6664 /* 6665 * Switch this ring with some other ring from 6666 * the default group. 6667 */ 6668 for (tring = defgrp->mrg_rings; tring != NULL; 6669 tring = tring->mr_next) { 6670 if (tring == (mac_ring_t *)mip->mi_default_tx_ring) 6671 continue; 6672 /* 6673 * If this ring is part of the rings asked by the 6674 * share we cannot use it for swapping. 
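			 * (rings[] is exactly the set of rings the share
			 * requested through ms_squery().)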
6675 */ 6676 for (j = 0; j < nrings; j++) { 6677 if (rings[j] == tring) 6678 break; 6679 } 6680 if (j >= nrings) 6681 break; 6682 } 6683 if (tring == NULL) { 6684 mac_tx_client_restart((mac_client_handle_t)mcip); 6685 return (ENOSPC); 6686 } 6687 if (mac_group_mov_ring(mip, group, tring) != 0) { 6688 mac_tx_client_restart((mac_client_handle_t)mcip); 6689 return (ENOSPC); 6690 } 6691 if (mac_group_mov_ring(mip, defgrp, ring) != 0) { 6692 (void) mac_group_mov_ring(mip, defgrp, tring); 6693 mac_tx_client_restart((mac_client_handle_t)mcip); 6694 return (ENOSPC); 6695 } 6696 } 6697 mac_tx_client_restart((mac_client_handle_t)mcip); 6698 ASSERT(ring->mr_gh == (mac_group_handle_t)defgrp); 6699 return (0); 6700 } 6701 6702 /* 6703 * Populate a zero-ring group with rings. If the share is non-NULL, 6704 * the rings are chosen according to that share. 6705 * Invoked after allocating a new RX or TX group through 6706 * mac_reserve_rx_group() or mac_reserve_tx_group(), respectively. 6707 * Returns zero on success, an errno otherwise. 6708 */ 6709 int 6710 i_mac_group_allocate_rings(mac_impl_t *mip, mac_ring_type_t ring_type, 6711 mac_group_t *src_group, mac_group_t *new_group, mac_share_handle_t share, 6712 uint32_t ringcnt) 6713 { 6714 mac_ring_t **rings, *ring; 6715 uint_t nrings; 6716 int rv = 0, i = 0, j; 6717 6718 ASSERT((ring_type == MAC_RING_TYPE_RX && 6719 mip->mi_rx_group_type == MAC_GROUP_TYPE_DYNAMIC) || 6720 (ring_type == MAC_RING_TYPE_TX && 6721 mip->mi_tx_group_type == MAC_GROUP_TYPE_DYNAMIC)); 6722 6723 /* 6724 * First find the rings to allocate to the group. 6725 */ 6726 if (share != 0) { 6727 /* get rings through ms_squery() */ 6728 mip->mi_share_capab.ms_squery(share, ring_type, NULL, &nrings); 6729 ASSERT(nrings != 0); 6730 rings = kmem_alloc(nrings * sizeof (mac_ring_handle_t), 6731 KM_SLEEP); 6732 mip->mi_share_capab.ms_squery(share, ring_type, 6733 (mac_ring_handle_t *)rings, &nrings); 6734 for (i = 0; i < nrings; i++) { 6735 /* 6736 * If we have given this ring to a non-default 6737 * group, we need to check if we can get this 6738 * ring. 6739 */ 6740 ring = rings[i]; 6741 if (ring->mr_gh != (mac_group_handle_t)src_group || 6742 ring == (mac_ring_t *)mip->mi_default_tx_ring) { 6743 if (mac_reclaim_ring_from_grp(mip, ring_type, 6744 ring, rings, nrings) != 0) { 6745 rv = ENOSPC; 6746 goto bail; 6747 } 6748 } 6749 } 6750 } else { 6751 /* 6752 * Pick one ring from the default group. 6753 * 6754 * For now pick the second ring, which requires the first ring 6755 * at index 0 to stay in the default group, since it is the 6756 * ring which carries the multicast traffic. 6757 * We need a better way for a driver to indicate this, 6758 * for example a per-ring flag.
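		 *
		 * In sketch form, for a request of ringcnt rings this
		 * selects:
		 *
		 *	RX: the first ringcnt rings whose mr_index != 0
		 *	TX: the first ringcnt rings that are not
		 *	    mi_default_tx_ring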
6759 */ 6760 rings = kmem_alloc(ringcnt * sizeof (mac_ring_handle_t), 6761 KM_SLEEP); 6762 for (ring = src_group->mrg_rings; ring != NULL; 6763 ring = ring->mr_next) { 6764 if (ring_type == MAC_RING_TYPE_RX && 6765 ring->mr_index == 0) { 6766 continue; 6767 } 6768 if (ring_type == MAC_RING_TYPE_TX && 6769 ring == (mac_ring_t *)mip->mi_default_tx_ring) { 6770 continue; 6771 } 6772 rings[i++] = ring; 6773 if (i == ringcnt) 6774 break; 6775 } 6776 ASSERT(ring != NULL); 6777 nrings = i; 6778 /* Not enough rings as required */ 6779 if (nrings != ringcnt) { 6780 rv = ENOSPC; 6781 goto bail; 6782 } 6783 } 6784 6785 switch (ring_type) { 6786 case MAC_RING_TYPE_RX: 6787 if (src_group->mrg_cur_count - nrings < 1) { 6788 /* we ran out of rings */ 6789 rv = ENOSPC; 6790 goto bail; 6791 } 6792 6793 /* move receive rings to new group */ 6794 for (i = 0; i < nrings; i++) { 6795 rv = mac_group_mov_ring(mip, new_group, rings[i]); 6796 if (rv != 0) { 6797 /* move rings back on failure */ 6798 for (j = 0; j < i; j++) { 6799 (void) mac_group_mov_ring(mip, 6800 src_group, rings[j]); 6801 } 6802 goto bail; 6803 } 6804 } 6805 break; 6806 6807 case MAC_RING_TYPE_TX: { 6808 mac_ring_t *tmp_ring; 6809 6810 /* move the TX rings to the new group */ 6811 for (i = 0; i < nrings; i++) { 6812 /* get the desired ring */ 6813 tmp_ring = mac_reserve_tx_ring(mip, rings[i]); 6814 if (tmp_ring == NULL) { 6815 rv = ENOSPC; 6816 goto bail; 6817 } 6818 ASSERT(tmp_ring == rings[i]); 6819 rv = mac_group_mov_ring(mip, new_group, rings[i]); 6820 if (rv != 0) { 6821 /* cleanup on failure */ 6822 for (j = 0; j < i; j++) { 6823 (void) mac_group_mov_ring(mip, 6824 MAC_DEFAULT_TX_GROUP(mip), 6825 rings[j]); 6826 } 6827 goto bail; 6828 } 6829 } 6830 break; 6831 } 6832 } 6833 6834 /* add group to share */ 6835 if (share != 0) 6836 mip->mi_share_capab.ms_sadd(share, new_group->mrg_driver); 6837 6838 bail: 6839 /* free temporary array of rings */ 6840 kmem_free(rings, nrings * sizeof (mac_ring_handle_t)); 6841 6842 return (rv); 6843 } 6844 6845 void 6846 mac_group_add_client(mac_group_t *grp, mac_client_impl_t *mcip) 6847 { 6848 mac_grp_client_t *mgcp; 6849 6850 for (mgcp = grp->mrg_clients; mgcp != NULL; mgcp = mgcp->mgc_next) { 6851 if (mgcp->mgc_client == mcip) 6852 break; 6853 } 6854 6855 ASSERT(mgcp == NULL); 6856 6857 mgcp = kmem_zalloc(sizeof (mac_grp_client_t), KM_SLEEP); 6858 mgcp->mgc_client = mcip; 6859 mgcp->mgc_next = grp->mrg_clients; 6860 grp->mrg_clients = mgcp; 6861 } 6862 6863 void 6864 mac_group_remove_client(mac_group_t *grp, mac_client_impl_t *mcip) 6865 { 6866 mac_grp_client_t *mgcp, **pprev; 6867 6868 for (pprev = &grp->mrg_clients, mgcp = *pprev; mgcp != NULL; 6869 pprev = &mgcp->mgc_next, mgcp = *pprev) { 6870 if (mgcp->mgc_client == mcip) 6871 break; 6872 } 6873 6874 ASSERT(mgcp != NULL); 6875 6876 *pprev = mgcp->mgc_next; 6877 kmem_free(mgcp, sizeof (mac_grp_client_t)); 6878 } 6879 6880 /* 6881 * Return true if any client on this group explicitly asked for HW 6882 * rings (of type mask) or has a bound share.
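 *
 * For example, the RX group reservation code below uses
 * i_mac_clients_hw(grp, MRP_RX_RINGS) to rule a group out as an
 * eviction candidate when one of its clients explicitly reserved
 * hardware rings.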
6883 */ 6884 static boolean_t 6885 i_mac_clients_hw(mac_group_t *grp, uint32_t mask) 6886 { 6887 mac_grp_client_t *mgcip; 6888 mac_client_impl_t *mcip; 6889 mac_resource_props_t *mrp; 6890 6891 for (mgcip = grp->mrg_clients; mgcip != NULL; mgcip = mgcip->mgc_next) { 6892 mcip = mgcip->mgc_client; 6893 mrp = MCIP_RESOURCE_PROPS(mcip); 6894 if (mcip->mci_share != 0 || (mrp->mrp_mask & mask) != 0) 6895 return (B_TRUE); 6896 } 6897 6898 return (B_FALSE); 6899 } 6900 6901 /* 6902 * Finds an available group and exclusively reserves it for a client. 6903 * The group is chosen to suit the flow's resource controls (bandwidth and 6904 * fanout requirements) and the address type. 6905 * If the requestor is the primary MAC then return the group with the 6906 * largest number of rings, otherwise the default ring when available. 6907 */ 6908 mac_group_t * 6909 mac_reserve_rx_group(mac_client_impl_t *mcip, uint8_t *mac_addr, boolean_t move) 6910 { 6911 mac_share_handle_t share = mcip->mci_share; 6912 mac_impl_t *mip = mcip->mci_mip; 6913 mac_group_t *grp = NULL; 6914 int i; 6915 int err = 0; 6916 mac_address_t *map; 6917 mac_resource_props_t *mrp = MCIP_RESOURCE_PROPS(mcip); 6918 int nrings; 6919 int donor_grp_rcnt; 6920 boolean_t need_exclgrp = B_FALSE; 6921 int need_rings = 0; 6922 mac_group_t *candidate_grp = NULL; 6923 mac_client_impl_t *gclient; 6924 mac_group_t *donorgrp = NULL; 6925 boolean_t rxhw = mrp->mrp_mask & MRP_RX_RINGS; 6926 boolean_t unspec = mrp->mrp_mask & MRP_RXRINGS_UNSPEC; 6927 boolean_t isprimary; 6928 6929 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip)); 6930 6931 isprimary = mcip->mci_flent->fe_type & FLOW_PRIMARY_MAC; 6932 6933 /* 6934 * Check if a group already has this MAC address (case of VLANs) 6935 * unless we are moving this MAC client from one group to another. 6936 */ 6937 if (!move && (map = mac_find_macaddr(mip, mac_addr)) != NULL) { 6938 if (map->ma_group != NULL) 6939 return (map->ma_group); 6940 } 6941 6942 if (mip->mi_rx_groups == NULL || mip->mi_rx_group_count == 0) 6943 return (NULL); 6944 6945 /* 6946 * If this client is requesting exclusive MAC access then 6947 * return NULL to ensure the client uses the default group. 6948 */ 6949 if (mcip->mci_state_flags & MCIS_EXCLUSIVE) 6950 return (NULL); 6951 6952 /* For dynamic groups default unspecified to 1 */ 6953 if (rxhw && unspec && 6954 mip->mi_rx_group_type == MAC_GROUP_TYPE_DYNAMIC) { 6955 mrp->mrp_nrxrings = 1; 6956 } 6957 6958 /* 6959 * For static grouping we allow only specifying rings=0 and 6960 * unspecified 6961 */ 6962 if (rxhw && mrp->mrp_nrxrings > 0 && 6963 mip->mi_rx_group_type == MAC_GROUP_TYPE_STATIC) { 6964 return (NULL); 6965 } 6966 6967 if (rxhw) { 6968 /* 6969 * We have explicitly asked for a group (with nrxrings, 6970 * if unspec). 6971 */ 6972 if (unspec || mrp->mrp_nrxrings > 0) { 6973 need_exclgrp = B_TRUE; 6974 need_rings = mrp->mrp_nrxrings; 6975 } else if (mrp->mrp_nrxrings == 0) { 6976 /* 6977 * We have asked for a software group. 6978 */ 6979 return (NULL); 6980 } 6981 } else if (isprimary && mip->mi_nactiveclients == 1 && 6982 mip->mi_rx_group_type == MAC_GROUP_TYPE_DYNAMIC) { 6983 /* 6984 * If the primary is the only active client on this 6985 * mip and we have not asked for any rings, we give 6986 * it the default group so that the primary gets to 6987 * use all the rings. 6988 */ 6989 return (NULL); 6990 } 6991 6992 /* The group that can donate rings */ 6993 donorgrp = mip->mi_rx_donor_grp; 6994 6995 /* 6996 * The number of rings that the default group can donate.
6997 * We need to leave at least one ring. 6998 */ 6999 donor_grp_rcnt = donorgrp->mrg_cur_count - 1; 7000 7001 /* 7002 * Try to exclusively reserve a RX group. 7003 * 7004 * For flows requiring HW_DEFAULT_RING (unicast flow of the primary 7005 * client), try to reserve a non-default RX group and give 7006 * it all the rings from the donor group, except the default ring 7007 * 7008 * For flows requiring HW_RING (unicast flow of other clients), try 7009 * to reserve non-default RX group with the specified number of 7010 * rings, if available. 7011 * 7012 * For flows that have not asked for a software or hardware ring, 7013 * try to reserve a non-default group with 1 ring, if available. 7014 */ 7015 for (i = 1; i < mip->mi_rx_group_count; i++) { 7016 grp = &mip->mi_rx_groups[i]; 7017 7018 DTRACE_PROBE3(rx__group__trying, char *, mip->mi_name, 7019 int, grp->mrg_index, mac_group_state_t, grp->mrg_state); 7020 7021 /* 7022 * Check if this group could be a candidate group for 7023 * eviction if we need a group for this MAC client, 7024 * but there aren't any. A candidate group is one 7025 * that didn't ask for an exclusive group, but got 7026 * one and it has enough rings (combined with what 7027 * the donor group can donate) for the new MAC 7028 * client. 7029 */ 7030 if (grp->mrg_state >= MAC_GROUP_STATE_RESERVED) { 7031 /* 7032 * If the donor group is not the default 7033 * group, don't bother looking for a candidate 7034 * group. If we don't have enough rings we 7035 * will check if the primary group can be 7036 * vacated. 7037 */ 7038 if (candidate_grp == NULL && 7039 donorgrp == MAC_DEFAULT_RX_GROUP(mip)) { 7040 if (!i_mac_clients_hw(grp, MRP_RX_RINGS) && 7041 (unspec || 7042 (grp->mrg_cur_count + donor_grp_rcnt >= 7043 need_rings))) { 7044 candidate_grp = grp; 7045 } 7046 } 7047 continue; 7048 } 7049 /* 7050 * This group could already be SHARED by other multicast 7051 * flows on this client. In that case, the group would 7052 * be shared and has already been started. 7053 */ 7054 ASSERT(grp->mrg_state != MAC_GROUP_STATE_UNINIT); 7055 7056 if ((grp->mrg_state == MAC_GROUP_STATE_REGISTERED) && 7057 (mac_start_group(grp) != 0)) { 7058 continue; 7059 } 7060 7061 if (mip->mi_rx_group_type != MAC_GROUP_TYPE_DYNAMIC) 7062 break; 7063 ASSERT(grp->mrg_cur_count == 0); 7064 7065 /* 7066 * Populate the group. Rings should be taken 7067 * from the donor group. 7068 */ 7069 nrings = rxhw ? need_rings : isprimary ? donor_grp_rcnt: 1; 7070 7071 /* 7072 * If the donor group can't donate, let's just walk and 7073 * see if someone can vacate a group, so that we have 7074 * enough rings for this, unless we already have 7075 * identified a candidate group. 7076 */ 7077 if (nrings <= donor_grp_rcnt) { 7078 err = i_mac_group_allocate_rings(mip, MAC_RING_TYPE_RX, 7079 donorgrp, grp, share, nrings); 7080 if (err == 0) { 7081 /* 7082 * For a share i_mac_group_allocate_rings gets 7083 * the rings from the driver, let's populate 7084 * the property for the client now. 7085 */ 7086 if (share != 0) { 7087 mac_client_set_rings( 7088 (mac_client_handle_t)mcip, 7089 grp->mrg_cur_count, -1); 7090 } 7091 if (mac_is_primary_client(mcip) && !rxhw) 7092 mip->mi_rx_donor_grp = grp; 7093 break; 7094 } 7095 } 7096 7097 DTRACE_PROBE3(rx__group__reserve__alloc__rings, char *, 7098 mip->mi_name, int, grp->mrg_index, int, err); 7099 7100 /* 7101 * It's a dynamic group but the grouping operation 7102 * failed.
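		 * Stop the group and let the loop try the next
		 * candidate.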
7103 */ 7104 mac_stop_group(grp); 7105 } 7106 7107 /* We didn't find an exclusive group for this MAC client */ 7108 if (i >= mip->mi_rx_group_count) { 7109 7110 if (!need_exclgrp) 7111 return (NULL); 7112 7113 /* 7114 * If we found a candidate group then move the 7115 * existing MAC client from the candidate_group to the 7116 * default group and give the candidate_group to the 7117 * new MAC client. If we didn't find a candidate 7118 * group, then check if the primary is in its own 7119 * group and if it can make way for this MAC client. 7120 */ 7121 if (candidate_grp == NULL && 7122 donorgrp != MAC_DEFAULT_RX_GROUP(mip) && 7123 donorgrp->mrg_cur_count >= need_rings) { 7124 candidate_grp = donorgrp; 7125 } 7126 if (candidate_grp != NULL) { 7127 boolean_t prim_grp = B_FALSE; 7128 7129 /* 7130 * Switch the existing MAC client from the 7131 * candidate group to the default group. If 7132 * the candidate group is the donor group, 7133 * then after the switch we need to update the 7134 * donor group too. 7135 */ 7136 grp = candidate_grp; 7137 gclient = grp->mrg_clients->mgc_client; 7138 VERIFY3P(gclient, !=, NULL); 7139 if (grp == mip->mi_rx_donor_grp) 7140 prim_grp = B_TRUE; 7141 if (mac_rx_switch_group(gclient, grp, 7142 MAC_DEFAULT_RX_GROUP(mip)) != 0) { 7143 return (NULL); 7144 } 7145 if (prim_grp) { 7146 mip->mi_rx_donor_grp = 7147 MAC_DEFAULT_RX_GROUP(mip); 7148 donorgrp = MAC_DEFAULT_RX_GROUP(mip); 7149 } 7150 7151 /* 7152 * Now give this group with the required rings 7153 * to this MAC client. 7154 */ 7155 ASSERT(grp->mrg_state == MAC_GROUP_STATE_REGISTERED); 7156 if (mac_start_group(grp) != 0) 7157 return (NULL); 7158 7159 if (mip->mi_rx_group_type != MAC_GROUP_TYPE_DYNAMIC) 7160 return (grp); 7161 7162 donor_grp_rcnt = donorgrp->mrg_cur_count - 1; 7163 ASSERT(grp->mrg_cur_count == 0); 7164 ASSERT(donor_grp_rcnt >= need_rings); 7165 err = i_mac_group_allocate_rings(mip, MAC_RING_TYPE_RX, 7166 donorgrp, grp, share, need_rings); 7167 if (err == 0) { 7168 /* 7169 * For a share i_mac_group_allocate_rings gets 7170 * the rings from the driver, let's populate 7171 * the property for the client now. 7172 */ 7173 if (share != 0) { 7174 mac_client_set_rings( 7175 (mac_client_handle_t)mcip, 7176 grp->mrg_cur_count, -1); 7177 } 7178 DTRACE_PROBE2(rx__group__reserved, 7179 char *, mip->mi_name, int, grp->mrg_index); 7180 return (grp); 7181 } 7182 DTRACE_PROBE3(rx__group__reserve__alloc__rings, char *, 7183 mip->mi_name, int, grp->mrg_index, int, err); 7184 mac_stop_group(grp); 7185 } 7186 return (NULL); 7187 } 7188 ASSERT(grp != NULL); 7189 7190 DTRACE_PROBE2(rx__group__reserved, 7191 char *, mip->mi_name, int, grp->mrg_index); 7192 return (grp); 7193 } 7194 7195 /* 7196 * mac_release_rx_group() 7197 * 7198 * Release the group when it has no remaining clients. The group is 7199 * stopped and its shares are removed and all rings are assigned back 7200 * to default group. This should never be called against the default 7201 * group. 7202 */ 7203 void 7204 mac_release_rx_group(mac_client_impl_t *mcip, mac_group_t *group) 7205 { 7206 mac_impl_t *mip = mcip->mci_mip; 7207 mac_ring_t *ring; 7208 7209 ASSERT(group != MAC_DEFAULT_RX_GROUP(mip)); 7210 ASSERT(MAC_GROUP_NO_CLIENT(group) == B_TRUE); 7211 7212 if (mip->mi_rx_donor_grp == group) 7213 mip->mi_rx_donor_grp = MAC_DEFAULT_RX_GROUP(mip); 7214 7215 /* 7216 * This is the case where there are no clients left. Any 7217 * SRS etc. on this group has also been quiesced.
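	 * All that remains is to sever the SRS linkage of any
	 * hardware-classified rings and stop the rings themselves.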
7218 */ 7219 for (ring = group->mrg_rings; ring != NULL; ring = ring->mr_next) { 7220 if (ring->mr_classify_type == MAC_HW_CLASSIFIER) { 7221 ASSERT(group->mrg_state == MAC_GROUP_STATE_RESERVED); 7222 /* 7223 * Remove the SRS associated with the HW ring. 7224 * As a result, polling will be disabled. 7225 */ 7226 ring->mr_srs = NULL; 7227 } 7228 ASSERT(group->mrg_state < MAC_GROUP_STATE_RESERVED || 7229 ring->mr_state == MR_INUSE); 7230 if (ring->mr_state == MR_INUSE) { 7231 mac_stop_ring(ring); 7232 ring->mr_flag = 0; 7233 } 7234 } 7235 7236 /* remove group from share */ 7237 if (mcip->mci_share != 0) { 7238 mip->mi_share_capab.ms_sremove(mcip->mci_share, 7239 group->mrg_driver); 7240 } 7241 7242 if (mip->mi_rx_group_type == MAC_GROUP_TYPE_DYNAMIC) { 7243 mac_ring_t *ring; 7244 7245 /* 7246 * Rings were dynamically allocated to group. 7247 * Move rings back to default group. 7248 */ 7249 while ((ring = group->mrg_rings) != NULL) { 7250 (void) mac_group_mov_ring(mip, mip->mi_rx_donor_grp, 7251 ring); 7252 } 7253 } 7254 mac_stop_group(group); 7255 /* 7256 * Possible improvement: See if we can assign the group just released 7257 * to another client of the mip 7258 */ 7259 } 7260 7261 /* 7262 * Move the MAC address from fgrp to tgrp. 7263 */ 7264 static int 7265 mac_rx_move_macaddr(mac_client_impl_t *mcip, mac_group_t *fgrp, 7266 mac_group_t *tgrp) 7267 { 7268 mac_impl_t *mip = mcip->mci_mip; 7269 uint8_t maddr[MAXMACADDRLEN]; 7270 int err = 0; 7271 uint16_t vid; 7272 mac_unicast_impl_t *muip; 7273 boolean_t use_hw; 7274 7275 mac_rx_client_quiesce((mac_client_handle_t)mcip); 7276 VERIFY3P(mcip->mci_unicast, !=, NULL); 7277 bcopy(mcip->mci_unicast->ma_addr, maddr, mcip->mci_unicast->ma_len); 7278 7279 /* 7280 * Does the client require MAC address hardware classification? 7281 */ 7282 use_hw = (mcip->mci_state_flags & MCIS_UNICAST_HW) != 0; 7283 vid = i_mac_flow_vid(mcip->mci_flent); 7284 7285 /* 7286 * You can never move an address that is shared by multiple 7287 * clients. mac_datapath_setup() ensures that clients sharing 7288 * an address are placed on the default group. This guarantees 7289 * that a non-default group will only ever have one client and 7290 * thus make full use of HW filters. 7291 */ 7292 if (mac_check_macaddr_shared(mcip->mci_unicast)) 7293 return (EINVAL); 7294 7295 err = mac_remove_macaddr_vlan(mcip->mci_unicast, vid); 7296 7297 if (err != 0) { 7298 mac_rx_client_restart((mac_client_handle_t)mcip); 7299 return (err); 7300 } 7301 7302 /* 7303 * If this isn't the primary MAC address then the 7304 * mac_address_t has been freed by the last call to 7305 * mac_remove_macaddr_vlan(). In any case, NULL the reference 7306 * to avoid a dangling pointer. 7307 */ 7308 mcip->mci_unicast = NULL; 7309 7310 /* 7311 * We also have to NULL all the mui_map references -- sun4v 7312 * strikes again! 7313 */ 7314 rw_enter(&mcip->mci_rw_lock, RW_WRITER); 7315 for (muip = mcip->mci_unicast_list; muip != NULL; muip = muip->mui_next) 7316 muip->mui_map = NULL; 7317 rw_exit(&mcip->mci_rw_lock); 7318 7319 /* 7320 * Program the H/W classifier first; if this fails we need not 7321 * proceed with the rest.
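	 *
	 * In sketch form the whole move is:
	 *
	 *	quiesce the client's RX path
	 *	mac_remove_macaddr_vlan()	drop the filter from fgrp
	 *	mac_add_macaddr_vlan(tgrp)	program the filter on tgrp
	 *	    (on failure re-add the filter to fgrp and bail)
	 *	re-lookup mci_unicast and restart the client's RX path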
7322 */ 7323 if ((err = mac_add_macaddr_vlan(mip, tgrp, maddr, vid, use_hw)) != 0) { 7324 int err2; 7325 7326 /* Revert the H/W classifier */ 7327 err2 = mac_add_macaddr_vlan(mip, fgrp, maddr, vid, use_hw); 7328 7329 if (err2 != 0) { 7330 cmn_err(CE_WARN, "Failed to revert HW classification" 7331 " on MAC %s, for client %s: %d.", mip->mi_name, 7332 mcip->mci_name, err2); 7333 } 7334 7335 mac_rx_client_restart((mac_client_handle_t)mcip); 7336 return (err); 7337 } 7338 7339 /* 7340 * Get a reference to the new mac_address_t and update the 7341 * client's reference. Then restart the client and add the 7342 * other clients of this MAC addr (if they exist). 7343 */ 7344 mcip->mci_unicast = mac_find_macaddr(mip, maddr); 7345 rw_enter(&mcip->mci_rw_lock, RW_WRITER); 7346 for (muip = mcip->mci_unicast_list; muip != NULL; muip = muip->mui_next) 7347 muip->mui_map = mcip->mci_unicast; 7348 rw_exit(&mcip->mci_rw_lock); 7349 mac_rx_client_restart((mac_client_handle_t)mcip); 7350 return (0); 7351 } 7352 7353 /* 7354 * Switch the MAC client from one group to another. This means we need 7355 * to remove the MAC address from the group, remove the MAC client, 7356 * teardown the SRSs and revert the group state. Then, we add the client 7357 * to the destination group, set the SRSs, and add the MAC address to the 7358 * group. 7359 */ 7360 int 7361 mac_rx_switch_group(mac_client_impl_t *mcip, mac_group_t *fgrp, 7362 mac_group_t *tgrp) 7363 { 7364 int err; 7365 mac_group_state_t next_state; 7366 mac_client_impl_t *group_only_mcip; 7367 mac_client_impl_t *gmcip; 7368 mac_impl_t *mip = mcip->mci_mip; 7369 mac_grp_client_t *mgcp; 7370 7371 VERIFY3P(fgrp, ==, mcip->mci_flent->fe_rx_ring_group); 7372 7373 if ((err = mac_rx_move_macaddr(mcip, fgrp, tgrp)) != 0) 7374 return (err); 7375 7376 /* 7377 * If the group is marked as reserved and in use by a single 7378 * client, then there is an SRS to teardown. 7379 */ 7380 if (fgrp->mrg_state == MAC_GROUP_STATE_RESERVED && 7381 MAC_GROUP_ONLY_CLIENT(fgrp) != NULL) { 7382 mac_rx_srs_group_teardown(mcip->mci_flent, B_TRUE); 7383 } 7384 7385 /* 7386 * If we are moving the client from a non-default group, then 7387 * we know that any additional clients on this group share the 7388 * same MAC address. Since we moved the MAC address filter, we 7389 * need to move these clients too. 7390 * 7391 * If we are moving the client from the default group and its 7392 * MAC address has VLAN clients, then we must move those 7393 * clients as well. 7394 * 7395 * In both cases the idea is the same: we moved the MAC 7396 * address filter to the tgrp, so we must move all clients 7397 * using that MAC address to tgrp as well. 7398 */ 7399 if (fgrp != MAC_DEFAULT_RX_GROUP(mip)) { 7400 mgcp = fgrp->mrg_clients; 7401 while (mgcp != NULL) { 7402 gmcip = mgcp->mgc_client; 7403 mgcp = mgcp->mgc_next; 7404 mac_group_remove_client(fgrp, gmcip); 7405 mac_group_add_client(tgrp, gmcip); 7406 gmcip->mci_flent->fe_rx_ring_group = tgrp; 7407 } 7408 mac_release_rx_group(mcip, fgrp); 7409 VERIFY3B(MAC_GROUP_NO_CLIENT(fgrp), ==, B_TRUE); 7410 mac_set_group_state(fgrp, MAC_GROUP_STATE_REGISTERED); 7411 } else { 7412 mac_group_remove_client(fgrp, mcip); 7413 mac_group_add_client(tgrp, mcip); 7414 mcip->mci_flent->fe_rx_ring_group = tgrp; 7415 7416 /* 7417 * If there are other clients (VLANs) sharing this address 7418 * then move them too. 7419 */ 7420 if (mac_check_macaddr_shared(mcip->mci_unicast)) { 7421 /* 7422 * We need to move all the clients that are using 7423 * this MAC address.
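		 * (These are VLAN clients layered on the same
		 * primary address.)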
7424 */ 7425 mgcp = fgrp->mrg_clients; 7426 while (mgcp != NULL) { 7427 gmcip = mgcp->mgc_client; 7428 mgcp = mgcp->mgc_next; 7429 if (mcip->mci_unicast == gmcip->mci_unicast) { 7430 mac_group_remove_client(fgrp, gmcip); 7431 mac_group_add_client(tgrp, gmcip); 7432 gmcip->mci_flent->fe_rx_ring_group = 7433 tgrp; 7434 } 7435 } 7436 } 7437 7438 /* 7439 * The default group still handles multicast and 7440 * broadcast traffic; it won't transition to 7441 * MAC_GROUP_STATE_REGISTERED. 7442 */ 7443 if (fgrp->mrg_state == MAC_GROUP_STATE_RESERVED) 7444 mac_rx_group_unmark(fgrp, MR_CONDEMNED); 7445 mac_set_group_state(fgrp, MAC_GROUP_STATE_SHARED); 7446 } 7447 7448 next_state = mac_group_next_state(tgrp, &group_only_mcip, 7449 MAC_DEFAULT_RX_GROUP(mip), B_TRUE); 7450 mac_set_group_state(tgrp, next_state); 7451 7452 /* 7453 * If the destination group is reserved, then setup the SRSes. 7454 * Otherwise make sure to use SW classification. 7455 */ 7456 if (tgrp->mrg_state == MAC_GROUP_STATE_RESERVED) { 7457 mac_rx_srs_group_setup(mcip, mcip->mci_flent, SRST_LINK); 7458 mac_fanout_setup(mcip, mcip->mci_flent, 7459 MCIP_RESOURCE_PROPS(mcip), mac_rx_deliver, mcip, NULL, 7460 NULL); 7461 mac_rx_group_unmark(tgrp, MR_INCIPIENT); 7462 } else { 7463 mac_rx_switch_grp_to_sw(tgrp); 7464 } 7465 7466 return (0); 7467 } 7468 7469 /* 7470 * Reserves a TX group for the specified share. Invoked by mac_tx_srs_setup() 7471 * when a share was allocated to the client. 7472 */ 7473 mac_group_t * 7474 mac_reserve_tx_group(mac_client_impl_t *mcip, boolean_t move) 7475 { 7476 mac_impl_t *mip = mcip->mci_mip; 7477 mac_group_t *grp = NULL; 7478 int rv; 7479 int i; 7480 int err; 7481 mac_group_t *defgrp; 7482 mac_share_handle_t share = mcip->mci_share; 7483 mac_resource_props_t *mrp = MCIP_RESOURCE_PROPS(mcip); 7484 int nrings; 7485 int defnrings; 7486 boolean_t need_exclgrp = B_FALSE; 7487 int need_rings = 0; 7488 mac_group_t *candidate_grp = NULL; 7489 mac_client_impl_t *gclient; 7490 mac_resource_props_t *gmrp; 7491 boolean_t txhw = mrp->mrp_mask & MRP_TX_RINGS; 7492 boolean_t unspec = mrp->mrp_mask & MRP_TXRINGS_UNSPEC; 7493 boolean_t isprimary; 7494 7495 isprimary = mcip->mci_flent->fe_type & FLOW_PRIMARY_MAC; 7496 7497 /* 7498 * When we come here for a VLAN on the primary (dladm create-vlan), 7499 * we need to pair it along with the primary (to keep it consistent 7500 * with the RX side). So, we check if the primary is already assigned 7501 * to a group and return the group if so. The other way is also 7502 * true, i.e. the VLAN is already created and now we are plumbing 7503 * the primary. 7504 */ 7505 if (!move && isprimary) { 7506 for (gclient = mip->mi_clients_list; gclient != NULL; 7507 gclient = gclient->mci_client_next) { 7508 if (gclient->mci_flent->fe_type & FLOW_PRIMARY_MAC && 7509 gclient->mci_flent->fe_tx_ring_group != NULL) { 7510 return (gclient->mci_flent->fe_tx_ring_group); 7511 } 7512 } 7513 } 7514 7515 if (mip->mi_tx_groups == NULL || mip->mi_tx_group_count == 0) 7516 return (NULL); 7517 7518 /* For dynamic groups, default unspec to 1 */ 7519 if (txhw && unspec && 7520 mip->mi_tx_group_type == MAC_GROUP_TYPE_DYNAMIC) { 7521 mrp->mrp_ntxrings = 1; 7522 } 7523 /* 7524 * For static grouping we allow only specifying rings=0 and 7525 * unspecified 7526 */ 7527 if (txhw && mrp->mrp_ntxrings > 0 && 7528 mip->mi_tx_group_type == MAC_GROUP_TYPE_STATIC) { 7529 return (NULL); 7530 } 7531 7532 if (txhw) { 7533 /* 7534 * We have explicitly asked for a group (with ntxrings, 7535 * if unspec). 
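		 * E.g. a link property request for two TX rings
		 * arrives here with MRP_TX_RINGS set and
		 * mrp_ntxrings == 2.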
7536 */ 7537 if (unspec || mrp->mrp_ntxrings > 0) { 7538 need_exclgrp = B_TRUE; 7539 need_rings = mrp->mrp_ntxrings; 7540 } else if (mrp->mrp_ntxrings == 0) { 7541 /* 7542 * We have asked for a software group. 7543 */ 7544 return (NULL); 7545 } 7546 } 7547 defgrp = MAC_DEFAULT_TX_GROUP(mip); 7548 /* 7549 * The number of rings that the default group can donate. 7550 * We need to leave at least one ring - the default ring - in 7551 * this group. 7552 */ 7553 defnrings = defgrp->mrg_cur_count - 1; 7554 7555 /* 7556 * Primary gets default group unless explicitly told not 7557 * to (i.e. rings > 0). 7558 */ 7559 if (isprimary && !need_exclgrp) 7560 return (NULL); 7561 7562 nrings = (mrp->mrp_mask & MRP_TX_RINGS) != 0 ? mrp->mrp_ntxrings : 1; 7563 for (i = 0; i < mip->mi_tx_group_count; i++) { 7564 grp = &mip->mi_tx_groups[i]; 7565 if ((grp->mrg_state == MAC_GROUP_STATE_RESERVED) || 7566 (grp->mrg_state == MAC_GROUP_STATE_UNINIT)) { 7567 /* 7568 * Select a candidate for replacement if we don't 7569 * get an exclusive group. A candidate group is one 7570 * that didn't ask for an exclusive group, but got 7571 * one and it has enough rings (combined with what 7572 * the default group can donate) for the new MAC 7573 * client. 7574 */ 7575 if (grp->mrg_state == MAC_GROUP_STATE_RESERVED && 7576 candidate_grp == NULL) { 7577 gclient = MAC_GROUP_ONLY_CLIENT(grp); 7578 VERIFY3P(gclient, !=, NULL); 7579 gmrp = MCIP_RESOURCE_PROPS(gclient); 7580 if (gclient->mci_share == 0 && 7581 (gmrp->mrp_mask & MRP_TX_RINGS) == 0 && 7582 (unspec || 7583 (grp->mrg_cur_count + defnrings) >= 7584 need_rings)) { 7585 candidate_grp = grp; 7586 } 7587 } 7588 continue; 7589 } 7590 /* 7591 * If the default group can't donate, let's just walk and 7592 * see if someone can vacate a group, so that we have 7593 * enough rings for this. 7594 */ 7595 if (mip->mi_tx_group_type != MAC_GROUP_TYPE_DYNAMIC || 7596 nrings <= defnrings) { 7597 if (grp->mrg_state == MAC_GROUP_STATE_REGISTERED) { 7598 rv = mac_start_group(grp); 7599 ASSERT(rv == 0); 7600 } 7601 break; 7602 } 7603 } 7604 7605 /* We didn't find an exclusive group for this MAC client */ 7606 if (i >= mip->mi_tx_group_count) { 7607 /* 7608 * If we need an exclusive group and have identified a 7609 * candidate group we switch the MAC client from the 7610 * candidate group to the default group and give the 7611 * candidate group to this client. 7612 */ 7613 if (need_exclgrp && candidate_grp != NULL) { 7614 /* 7615 * Switch the MAC client from the candidate 7616 * group to the default group. We know the 7617 * candidate_grp came from a reserved group 7618 * and thus only has one client. 7619 */ 7620 grp = candidate_grp; 7621 gclient = MAC_GROUP_ONLY_CLIENT(grp); 7622 VERIFY3P(gclient, !=, NULL); 7623 mac_tx_client_quiesce((mac_client_handle_t)gclient); 7624 mac_tx_switch_group(gclient, grp, defgrp); 7625 mac_tx_client_restart((mac_client_handle_t)gclient); 7626 7627 /* 7628 * Give the candidate group with the specified number 7629 * of rings to this MAC client. 7630 */ 7631 ASSERT(grp->mrg_state == MAC_GROUP_STATE_REGISTERED); 7632 rv = mac_start_group(grp); 7633 ASSERT(rv == 0); 7634 7635 if (mip->mi_tx_group_type != MAC_GROUP_TYPE_DYNAMIC) 7636 return (grp); 7637 7638 ASSERT(grp->mrg_cur_count == 0); 7639 ASSERT(defgrp->mrg_cur_count > need_rings); 7640 7641 err = i_mac_group_allocate_rings(mip, MAC_RING_TYPE_TX, 7642 defgrp, grp, share, need_rings); 7643 if (err == 0) { 7644 /* 7645 * For a share i_mac_group_allocate_rings gets 7646 * the rings from the driver, let's populate 7647 * the property for the client now.
7648 */ 7649 if (share != 0) { 7650 mac_client_set_rings( 7651 (mac_client_handle_t)mcip, -1, 7652 grp->mrg_cur_count); 7653 } 7654 mip->mi_tx_group_free--; 7655 return (grp); 7656 } 7657 DTRACE_PROBE3(tx__group__reserve__alloc__rings, char *, 7658 mip->mi_name, int, grp->mrg_index, int, err); 7659 mac_stop_group(grp); 7660 } 7661 return (NULL); 7662 } 7663 /* 7664 * We got an exclusive group, but it is not dynamic. 7665 */ 7666 if (mip->mi_tx_group_type != MAC_GROUP_TYPE_DYNAMIC) { 7667 mip->mi_tx_group_free--; 7668 return (grp); 7669 } 7670 7671 rv = i_mac_group_allocate_rings(mip, MAC_RING_TYPE_TX, defgrp, grp, 7672 share, nrings); 7673 if (rv != 0) { 7674 DTRACE_PROBE3(tx__group__reserve__alloc__rings, 7675 char *, mip->mi_name, int, grp->mrg_index, int, rv); 7676 mac_stop_group(grp); 7677 return (NULL); 7678 } 7679 /* 7680 * For a share i_mac_group_allocate_rings gets the rings from the 7681 * driver, let's populate the property for the client now. 7682 */ 7683 if (share != 0) { 7684 mac_client_set_rings((mac_client_handle_t)mcip, -1, 7685 grp->mrg_cur_count); 7686 } 7687 mip->mi_tx_group_free--; 7688 return (grp); 7689 } 7690 7691 void 7692 mac_release_tx_group(mac_client_impl_t *mcip, mac_group_t *grp) 7693 { 7694 mac_impl_t *mip = mcip->mci_mip; 7695 mac_share_handle_t share = mcip->mci_share; 7696 mac_ring_t *ring; 7697 mac_soft_ring_set_t *srs = MCIP_TX_SRS(mcip); 7698 mac_group_t *defgrp; 7699 7700 defgrp = MAC_DEFAULT_TX_GROUP(mip); 7701 if (srs != NULL) { 7702 if (srs->srs_soft_ring_count > 0) { 7703 for (ring = grp->mrg_rings; ring != NULL; 7704 ring = ring->mr_next) { 7705 ASSERT(mac_tx_srs_ring_present(srs, ring)); 7706 mac_tx_invoke_callbacks(mcip, 7707 (mac_tx_cookie_t) 7708 mac_tx_srs_get_soft_ring(srs, ring)); 7709 mac_tx_srs_del_ring(srs, ring); 7710 } 7711 } else { 7712 ASSERT(srs->srs_tx.st_arg2 != NULL); 7713 srs->srs_tx.st_arg2 = NULL; 7714 mac_srs_stat_delete(srs); 7715 } 7716 } 7717 if (share != 0) 7718 mip->mi_share_capab.ms_sremove(share, grp->mrg_driver); 7719 7720 /* move the rings back to the pool */ 7721 if (mip->mi_tx_group_type == MAC_GROUP_TYPE_DYNAMIC) { 7722 while ((ring = grp->mrg_rings) != NULL) 7723 (void) mac_group_mov_ring(mip, defgrp, ring); 7724 } 7725 mac_stop_group(grp); 7726 mip->mi_tx_group_free++; 7727 } 7728 7729 /* 7730 * Disassociate a MAC client from a group, i.e. go through the rings in the 7731 * group and delete all the soft rings tied to them. 7732 */ 7733 static void 7734 mac_tx_dismantle_soft_rings(mac_group_t *fgrp, flow_entry_t *flent) 7735 { 7736 mac_client_impl_t *mcip = flent->fe_mcip; 7737 mac_soft_ring_set_t *tx_srs; 7738 mac_srs_tx_t *tx; 7739 mac_ring_t *ring; 7740 7741 tx_srs = flent->fe_tx_srs; 7742 tx = &tx_srs->srs_tx; 7743 7744 /* Single-ring case: we haven't created any soft rings */ 7745 if (tx->st_mode == SRS_TX_BW || tx->st_mode == SRS_TX_SERIALIZE || 7746 tx->st_mode == SRS_TX_DEFAULT) { 7747 tx->st_arg2 = NULL; 7748 mac_srs_stat_delete(tx_srs); 7749 /* Fanout case, where we have to dismantle the soft rings */ 7750 } else { 7751 for (ring = fgrp->mrg_rings; ring != NULL; 7752 ring = ring->mr_next) { 7753 ASSERT(mac_tx_srs_ring_present(tx_srs, ring)); 7754 mac_tx_invoke_callbacks(mcip, 7755 (mac_tx_cookie_t)mac_tx_srs_get_soft_ring(tx_srs, 7756 ring)); 7757 mac_tx_srs_del_ring(tx_srs, ring); 7758 } 7759 ASSERT(tx->st_arg2 == NULL); 7760 } 7761 } 7762 7763 /* 7764 * Switch the MAC client from one group to another. This means we need 7765 * to remove the MAC client, teardown the SRSs and revert the group state.
7766 * Then, we add the client to the destination group, set the SRSs etc. 7767 */ 7768 void 7769 mac_tx_switch_group(mac_client_impl_t *mcip, mac_group_t *fgrp, 7770 mac_group_t *tgrp) 7771 { 7772 mac_client_impl_t *group_only_mcip; 7773 mac_impl_t *mip = mcip->mci_mip; 7774 flow_entry_t *flent = mcip->mci_flent; 7775 mac_group_t *defgrp; 7776 mac_grp_client_t *mgcp; 7777 mac_client_impl_t *gmcip; 7778 flow_entry_t *gflent; 7779 7780 defgrp = MAC_DEFAULT_TX_GROUP(mip); 7781 ASSERT(fgrp == flent->fe_tx_ring_group); 7782 7783 if (fgrp == defgrp) { 7784 /* 7785 * If this is the primary we need to find any VLANs on 7786 * the primary and move them too. 7787 */ 7788 mac_group_remove_client(fgrp, mcip); 7789 mac_tx_dismantle_soft_rings(fgrp, flent); 7790 if (mac_check_macaddr_shared(mcip->mci_unicast)) { 7791 mgcp = fgrp->mrg_clients; 7792 while (mgcp != NULL) { 7793 gmcip = mgcp->mgc_client; 7794 mgcp = mgcp->mgc_next; 7795 if (mcip->mci_unicast != gmcip->mci_unicast) 7796 continue; 7797 mac_tx_client_quiesce( 7798 (mac_client_handle_t)gmcip); 7799 7800 gflent = gmcip->mci_flent; 7801 mac_group_remove_client(fgrp, gmcip); 7802 mac_tx_dismantle_soft_rings(fgrp, gflent); 7803 7804 mac_group_add_client(tgrp, gmcip); 7805 gflent->fe_tx_ring_group = tgrp; 7806 /* We could directly set this to SHARED */ 7807 tgrp->mrg_state = mac_group_next_state(tgrp, 7808 &group_only_mcip, defgrp, B_FALSE); 7809 7810 mac_tx_srs_group_setup(gmcip, gflent, 7811 SRST_LINK); 7812 mac_fanout_setup(gmcip, gflent, 7813 MCIP_RESOURCE_PROPS(gmcip), mac_rx_deliver, 7814 gmcip, NULL, NULL); 7815 7816 mac_tx_client_restart( 7817 (mac_client_handle_t)gmcip); 7818 } 7819 } 7820 if (MAC_GROUP_NO_CLIENT(fgrp)) { 7821 mac_ring_t *ring; 7822 int cnt; 7823 int ringcnt; 7824 7825 fgrp->mrg_state = MAC_GROUP_STATE_REGISTERED; 7826 /* 7827 * Additionally, we need to stop all 7828 * the rings in the default group, except 7829 * the default ring. The reason being 7830 * this group won't be released since it is 7831 * the default group, so the rings won't 7832 * be stopped otherwise. 7833 */ 7834 ringcnt = fgrp->mrg_cur_count; 7835 ring = fgrp->mrg_rings; 7836 for (cnt = 0; cnt < ringcnt; cnt++) { 7837 if (ring->mr_state == MR_INUSE && 7838 ring != 7839 (mac_ring_t *)mip->mi_default_tx_ring) { 7840 mac_stop_ring(ring); 7841 ring->mr_flag = 0; 7842 } 7843 ring = ring->mr_next; 7844 } 7845 } else if (MAC_GROUP_ONLY_CLIENT(fgrp) != NULL) { 7846 fgrp->mrg_state = MAC_GROUP_STATE_RESERVED; 7847 } else { 7848 ASSERT(fgrp->mrg_state == MAC_GROUP_STATE_SHARED); 7849 } 7850 } else { 7851 /* 7852 * We could have VLANs sharing the non-default group with 7853 * the primary.

/*
 * This is a 1-time control path activity initiated by the client (IP).
 * The mac perimeter protects against other simultaneous control activities,
 * for example an ioctl that attempts to change the degree of fanout and
 * increase or decrease the number of softrings associated with this Tx SRS.
 */
static mac_tx_notify_cb_t *
mac_client_tx_notify_add(mac_client_impl_t *mcip,
    mac_tx_notify_t notify, void *arg)
{
	mac_cb_info_t		*mcbi;
	mac_tx_notify_cb_t	*mtnfp;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));

	mtnfp = kmem_zalloc(sizeof (mac_tx_notify_cb_t), KM_SLEEP);
	mtnfp->mtnf_fn = notify;
	mtnfp->mtnf_arg = arg;
	mtnfp->mtnf_link.mcb_objp = mtnfp;
	mtnfp->mtnf_link.mcb_objsize = sizeof (mac_tx_notify_cb_t);
	mtnfp->mtnf_link.mcb_flags = MCB_TX_NOTIFY_CB_T;

	mcbi = &mcip->mci_tx_notify_cb_info;
	mutex_enter(mcbi->mcbi_lockp);
	mac_callback_add(mcbi, &mcip->mci_tx_notify_cb_list, &mtnfp->mtnf_link);
	mutex_exit(mcbi->mcbi_lockp);
	return (mtnfp);
}

static void
mac_client_tx_notify_remove(mac_client_impl_t *mcip, mac_tx_notify_cb_t *mtnfp)
{
	mac_cb_info_t	*mcbi;
	mac_cb_t	**cblist;

	ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));

	if (!mac_callback_find(&mcip->mci_tx_notify_cb_info,
	    &mcip->mci_tx_notify_cb_list, &mtnfp->mtnf_link)) {
		cmn_err(CE_WARN,
		    "mac_client_tx_notify_remove: callback not "
		    "found, mcip 0x%p mtnfp 0x%p", (void *)mcip, (void *)mtnfp);
		return;
	}

	mcbi = &mcip->mci_tx_notify_cb_info;
	cblist = &mcip->mci_tx_notify_cb_list;
	mutex_enter(mcbi->mcbi_lockp);
	if (mac_callback_remove(mcbi, cblist, &mtnfp->mtnf_link))
		kmem_free(mtnfp, sizeof (mac_tx_notify_cb_t));
	else
		mac_callback_remove_wait(&mcip->mci_tx_notify_cb_info);
	mutex_exit(mcbi->mcbi_lockp);
}

/*
 * mac_client_tx_notify():
 * Add or remove a flow-control notification callback. A non-NULL
 * callb_func registers a new callback; a NULL callb_func removes the
 * callback identified by ptr.
 */
mac_tx_notify_handle_t
mac_client_tx_notify(mac_client_handle_t mch, mac_tx_notify_t callb_func,
    void *ptr)
{
	mac_client_impl_t	*mcip = (mac_client_impl_t *)mch;
	mac_tx_notify_cb_t	*mtnfp = NULL;

	i_mac_perim_enter(mcip->mci_mip);

	if (callb_func != NULL) {
		/* Add a notify callback */
		mtnfp = mac_client_tx_notify_add(mcip, callb_func, ptr);
	} else {
		mac_client_tx_notify_remove(mcip, (mac_tx_notify_cb_t *)ptr);
	}
	i_mac_perim_exit(mcip->mci_mip);

	return ((mac_tx_notify_handle_t)mtnfp);
}
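
/*
 * For example, a MAC client that hits flow control can register a
 * callback to learn when it may resume transmitting, and later remove it
 * using the returned handle. This is an illustrative sketch only;
 * my_tx_notify() and my_state are hypothetical names, and the callback
 * signature is that of mac_tx_notify_t:
 *
 *	static void
 *	my_tx_notify(void *arg, mac_tx_cookie_t cookie)
 *	{
 *		... the Tx path identified by cookie has drained ...
 *	}
 *
 *	mac_tx_notify_handle_t hdl;
 *
 *	hdl = mac_client_tx_notify(mch, my_tx_notify, my_state);
 *	...
 *	(void) mac_client_tx_notify(mch, NULL, hdl);
 */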

void
mac_bridge_vectors(mac_bridge_tx_t txf, mac_bridge_rx_t rxf,
    mac_bridge_ref_t reff, mac_bridge_ls_t lsf)
{
	mac_bridge_tx_cb = txf;
	mac_bridge_rx_cb = rxf;
	mac_bridge_ref_cb = reff;
	mac_bridge_ls_cb = lsf;
}

int
mac_bridge_set(mac_handle_t mh, mac_handle_t link)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;
	int		retv;

	mutex_enter(&mip->mi_bridge_lock);
	if (mip->mi_bridge_link == NULL) {
		mip->mi_bridge_link = link;
		retv = 0;
	} else {
		retv = EBUSY;
	}
	mutex_exit(&mip->mi_bridge_lock);
	if (retv == 0) {
		mac_poll_state_change(mh, B_FALSE);
		mac_capab_update(mh);
	}
	return (retv);
}

/*
 * Disable bridging on the indicated link.
 */
void
mac_bridge_clear(mac_handle_t mh, mac_handle_t link)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;

	mutex_enter(&mip->mi_bridge_lock);
	ASSERT(mip->mi_bridge_link == link);
	mip->mi_bridge_link = NULL;
	mutex_exit(&mip->mi_bridge_lock);
	mac_poll_state_change(mh, B_TRUE);
	mac_capab_update(mh);
}
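
/*
 * A bridge instance attaches itself to a link with mac_bridge_set() and
 * detaches with mac_bridge_clear(); only one bridge may be active on a
 * link at a time. Illustrative sketch only (bridge_mh and the error
 * handling are hypothetical):
 *
 *	if (mac_bridge_set(mh, bridge_mh) == EBUSY) {
 *		... the link is already bridged; fail the request ...
 *	}
 *	...
 *	mac_bridge_clear(mh, bridge_mh);
 *
 * Note that setting a bridge link also disables Rx polling on the mac
 * (mac_poll_state_change(mh, B_FALSE)) and mac_bridge_clear()
 * re-enables it.
 */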

void
mac_no_active(mac_handle_t mh)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;

	i_mac_perim_enter(mip);
	mip->mi_state_flags |= MIS_NO_ACTIVE;
	i_mac_perim_exit(mip);
}

/*
 * Walk the primary VLAN clients whenever the primary's rings property
 * changes and update the mac_resource_props_t for each VLAN client.
 * We need to do this since we don't support setting these properties
 * on the primary's VLAN clients, but the VLAN clients have to
 * follow the primary w.r.t. the rings property.
 */
void
mac_set_prim_vlan_rings(mac_impl_t *mip, mac_resource_props_t *mrp)
{
	mac_client_impl_t	*vmcip;
	mac_resource_props_t	*vmrp;

	for (vmcip = mip->mi_clients_list; vmcip != NULL;
	    vmcip = vmcip->mci_client_next) {
		if (!(vmcip->mci_flent->fe_type & FLOW_PRIMARY_MAC) ||
		    mac_client_vid((mac_client_handle_t)vmcip) ==
		    VLAN_ID_NONE) {
			continue;
		}
		vmrp = MCIP_RESOURCE_PROPS(vmcip);

		vmrp->mrp_nrxrings = mrp->mrp_nrxrings;
		if (mrp->mrp_mask & MRP_RX_RINGS)
			vmrp->mrp_mask |= MRP_RX_RINGS;
		else if (vmrp->mrp_mask & MRP_RX_RINGS)
			vmrp->mrp_mask &= ~MRP_RX_RINGS;

		vmrp->mrp_ntxrings = mrp->mrp_ntxrings;
		if (mrp->mrp_mask & MRP_TX_RINGS)
			vmrp->mrp_mask |= MRP_TX_RINGS;
		else if (vmrp->mrp_mask & MRP_TX_RINGS)
			vmrp->mrp_mask &= ~MRP_TX_RINGS;

		if (mrp->mrp_mask & MRP_RXRINGS_UNSPEC)
			vmrp->mrp_mask |= MRP_RXRINGS_UNSPEC;
		else
			vmrp->mrp_mask &= ~MRP_RXRINGS_UNSPEC;

		if (mrp->mrp_mask & MRP_TXRINGS_UNSPEC)
			vmrp->mrp_mask |= MRP_TXRINGS_UNSPEC;
		else
			vmrp->mrp_mask &= ~MRP_TXRINGS_UNSPEC;
	}
}

/*
 * We are adding or removing ring(s) from a group. The source for taking
 * rings is the default group. The destination for giving rings back is
 * the default group.
 */
int
mac_group_ring_modify(mac_client_impl_t *mcip, mac_group_t *group,
    mac_group_t *defgrp)
{
	mac_resource_props_t	*mrp = MCIP_RESOURCE_PROPS(mcip);
	uint_t			modify;
	int			count;
	mac_ring_t		*ring;
	mac_ring_t		*next;
	mac_impl_t		*mip = mcip->mci_mip;
	mac_ring_t		**rings;
	uint_t			ringcnt;
	int			i = 0;
	boolean_t		rx_group = group->mrg_type == MAC_RING_TYPE_RX;
	int			start;
	int			end;
	mac_group_t		*tgrp;
	int			j;
	int			rv = 0;

	/*
	 * If we are asked for just a group, we give 1 ring, else
	 * the specified number of rings.
	 */
	if (rx_group) {
		ringcnt = (mrp->mrp_mask & MRP_RXRINGS_UNSPEC) ? 1 :
		    mrp->mrp_nrxrings;
	} else {
		ringcnt = (mrp->mrp_mask & MRP_TXRINGS_UNSPEC) ? 1 :
		    mrp->mrp_ntxrings;
	}

	/* Don't allow modifying rings for a share for now. */
	ASSERT(mcip->mci_share == 0);

	if (ringcnt == group->mrg_cur_count)
		return (0);

	if (group->mrg_cur_count > ringcnt) {
		modify = group->mrg_cur_count - ringcnt;
		if (rx_group) {
			if (mip->mi_rx_donor_grp == group) {
				ASSERT(mac_is_primary_client(mcip));
				mip->mi_rx_donor_grp = defgrp;
			} else {
				defgrp = mip->mi_rx_donor_grp;
			}
		}
		ring = group->mrg_rings;
		rings = kmem_alloc(modify * sizeof (mac_ring_handle_t),
		    KM_SLEEP);
		j = 0;
		for (count = 0; count < modify; count++) {
			next = ring->mr_next;
			rv = mac_group_mov_ring(mip, defgrp, ring);
			if (rv != 0) {
				/* Cleanup on failure */
				for (j = 0; j < count; j++) {
					(void) mac_group_mov_ring(mip, group,
					    rings[j]);
				}
				break;
			}
			rings[j++] = ring;
			ring = next;
		}
		kmem_free(rings, modify * sizeof (mac_ring_handle_t));
		return (rv);
	}
	if (ringcnt >= MAX_RINGS_PER_GROUP)
		return (EINVAL);

	modify = ringcnt - group->mrg_cur_count;

	if (rx_group) {
		if (group != mip->mi_rx_donor_grp)
			defgrp = mip->mi_rx_donor_grp;
		else
			/*
			 * This is the donor group with all the remaining
			 * rings. The default group now gets to be the donor.
			 */
			mip->mi_rx_donor_grp = defgrp;
		start = 1;
		end = mip->mi_rx_group_count;
	} else {
		start = 0;
		end = mip->mi_tx_group_count - 1;
	}
	/*
	 * If the default doesn't have any rings, let's see if we can
	 * take rings given to an h/w client that doesn't need them.
	 * For now, we just see if there is any one client that can donate
	 * all the required rings.
	 */
	if (defgrp->mrg_cur_count < (modify + 1)) {
		for (i = start; i < end; i++) {
			if (rx_group) {
				tgrp = &mip->mi_rx_groups[i];
				if (tgrp == group || tgrp->mrg_state <
				    MAC_GROUP_STATE_RESERVED) {
					continue;
				}
				if (i_mac_clients_hw(tgrp, MRP_RX_RINGS))
					continue;
				mcip = tgrp->mrg_clients->mgc_client;
				VERIFY3P(mcip, !=, NULL);
				if ((tgrp->mrg_cur_count +
				    defgrp->mrg_cur_count) < (modify + 1)) {
					continue;
				}
				if (mac_rx_switch_group(mcip, tgrp,
				    defgrp) != 0) {
					return (ENOSPC);
				}
			} else {
				tgrp = &mip->mi_tx_groups[i];
				if (tgrp == group || tgrp->mrg_state <
				    MAC_GROUP_STATE_RESERVED) {
					continue;
				}
				if (i_mac_clients_hw(tgrp, MRP_TX_RINGS))
					continue;
				mcip = tgrp->mrg_clients->mgc_client;
				VERIFY3P(mcip, !=, NULL);
				if ((tgrp->mrg_cur_count +
				    defgrp->mrg_cur_count) < (modify + 1)) {
					continue;
				}
				/* OK, we can switch this to s/w */
				mac_tx_client_quiesce(
				    (mac_client_handle_t)mcip);
				mac_tx_switch_group(mcip, tgrp, defgrp);
				mac_tx_client_restart(
				    (mac_client_handle_t)mcip);
			}
		}
		if (defgrp->mrg_cur_count < (modify + 1))
			return (ENOSPC);
	}
	if ((rv = i_mac_group_allocate_rings(mip, group->mrg_type, defgrp,
	    group, mcip->mci_share, modify)) != 0) {
		return (rv);
	}
	return (0);
}

/*
 * Given the poolname in mac_resource_props, find the cpupart
 * that is associated with this pool. The cpupart will be used
 * later for finding the cpus to be bound to the networking threads.
 *
 * use_default is set B_TRUE if pools are enabled and pool_default
 * is returned. This avoids a second lookup to set the poolname
 * for pool-effective.
 *
 * returns:
 *
 *	NULL - pools are disabled or if the 'cpus' property is set.
 *	cpupart of pool_default - pools are enabled and the pool
 *	    is not available or poolname is blank
 *	cpupart of named pool - pools are enabled and the pool
 *	    is available.
 */
cpupart_t *
mac_pset_find(mac_resource_props_t *mrp, boolean_t *use_default)
{
	pool_t		*pool;
	cpupart_t	*cpupart;

	*use_default = B_FALSE;

	/* CPUs property is set */
	if (mrp->mrp_mask & MRP_CPUS)
		return (NULL);

	ASSERT(pool_lock_held());

	/* Pools are disabled, no pset */
	if (pool_state == POOL_DISABLED)
		return (NULL);

	/* Pools property is set */
	if (mrp->mrp_mask & MRP_POOL) {
		if ((pool = pool_lookup_pool_by_name(mrp->mrp_pool)) == NULL) {
			/* Pool not found */
			DTRACE_PROBE1(mac_pset_find_no_pool, char *,
			    mrp->mrp_pool);
			*use_default = B_TRUE;
			pool = pool_default;
		}
	/* Pools property is not set */
	} else {
		*use_default = B_TRUE;
		pool = pool_default;
	}

	/* Find the CPU pset that corresponds to the pool */
	mutex_enter(&cpu_lock);
	if ((cpupart = cpupart_find(pool->pool_pset->pset_id)) == NULL) {
		DTRACE_PROBE1(mac_find_pset_no_pset, psetid_t,
		    pool->pool_pset->pset_id);
	}
	mutex_exit(&cpu_lock);

	return (cpupart);
}

void
mac_set_pool_effective(boolean_t use_default, cpupart_t *cpupart,
    mac_resource_props_t *mrp, mac_resource_props_t *emrp)
{
	ASSERT(pool_lock_held());

	if (cpupart != NULL) {
		emrp->mrp_mask |= MRP_POOL;
		if (use_default) {
			(void) strcpy(emrp->mrp_pool,
			    "pool_default");
		} else {
			ASSERT(strlen(mrp->mrp_pool) != 0);
			(void) strcpy(emrp->mrp_pool,
			    mrp->mrp_pool);
		}
	} else {
		emrp->mrp_mask &= ~MRP_POOL;
		bzero(emrp->mrp_pool, MAXPATHLEN);
	}
}
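
/*
 * These two routines are used together, with the pool lock held. For
 * example, mac_pool_link_update() below re-binds the worker threads and
 * publishes the effective pool with the following sequence:
 *
 *	pool_lock();
 *	cpupart = mac_pset_find(mrp, &use_default);
 *	mac_fanout_setup(mcip, mcip->mci_flent, mrp,
 *	    mac_rx_deliver, mcip, NULL, cpupart);
 *	mac_set_pool_effective(use_default, cpupart, mrp, emrp);
 *	pool_unlock();
 */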

struct mac_pool_arg {
	char		mpa_poolname[MAXPATHLEN];
	pool_event_t	mpa_what;
};

/*ARGSUSED*/
static uint_t
mac_pool_link_update(mod_hash_key_t key, mod_hash_val_t *val, void *arg)
{
	struct mac_pool_arg	*mpa = arg;
	mac_impl_t		*mip = (mac_impl_t *)val;
	mac_client_impl_t	*mcip;
	mac_resource_props_t	*mrp, *emrp;
	boolean_t		pool_update = B_FALSE;
	boolean_t		pool_clear = B_FALSE;
	boolean_t		use_default = B_FALSE;
	cpupart_t		*cpupart = NULL;

	mrp = kmem_zalloc(sizeof (*mrp), KM_SLEEP);
	i_mac_perim_enter(mip);
	for (mcip = mip->mi_clients_list; mcip != NULL;
	    mcip = mcip->mci_client_next) {
		pool_update = B_FALSE;
		pool_clear = B_FALSE;
		use_default = B_FALSE;
		mac_client_get_resources((mac_client_handle_t)mcip, mrp);
		emrp = MCIP_EFFECTIVE_PROPS(mcip);

		/*
		 * When pools are enabled
		 */
		if ((mpa->mpa_what == POOL_E_ENABLE) &&
		    ((mrp->mrp_mask & MRP_CPUS) == 0)) {
			mrp->mrp_mask |= MRP_POOL;
			pool_update = B_TRUE;
		}

		/*
		 * When pools are disabled
		 */
		if ((mpa->mpa_what == POOL_E_DISABLE) &&
		    ((mrp->mrp_mask & MRP_CPUS) == 0)) {
			mrp->mrp_mask |= MRP_POOL;
			pool_clear = B_TRUE;
		}

		/*
		 * Look for links with the pool property set and the poolname
		 * matching the one which is changing.
		 */
		if (strcmp(mrp->mrp_pool, mpa->mpa_poolname) == 0) {
			/*
			 * The pool associated with the link has changed.
			 */
			if (mpa->mpa_what == POOL_E_CHANGE) {
				mrp->mrp_mask |= MRP_POOL;
				pool_update = B_TRUE;
			}
		}

		/*
		 * This link is associated with pool_default and
		 * pool_default has changed.
		 */
		if ((mpa->mpa_what == POOL_E_CHANGE) &&
		    (strcmp(emrp->mrp_pool, "pool_default") == 0) &&
		    (strcmp(mpa->mpa_poolname, "pool_default") == 0)) {
			mrp->mrp_mask |= MRP_POOL;
			pool_update = B_TRUE;
		}

		/*
		 * Get new list of cpus for the pool, bind network
		 * threads to new list of cpus and update resources.
		 */
		if (pool_update) {
			if (MCIP_DATAPATH_SETUP(mcip)) {
				pool_lock();
				cpupart = mac_pset_find(mrp, &use_default);
				mac_fanout_setup(mcip, mcip->mci_flent, mrp,
				    mac_rx_deliver, mcip, NULL, cpupart);
				mac_set_pool_effective(use_default, cpupart,
				    mrp, emrp);
				pool_unlock();
			}
			mac_update_resources(mrp, MCIP_RESOURCE_PROPS(mcip),
			    B_FALSE);
		}

		/*
		 * Clear the effective pool and bind network threads
		 * to any available CPU.
		 */
		if (pool_clear) {
			if (MCIP_DATAPATH_SETUP(mcip)) {
				emrp->mrp_mask &= ~MRP_POOL;
				bzero(emrp->mrp_pool, MAXPATHLEN);
				mac_fanout_setup(mcip, mcip->mci_flent, mrp,
				    mac_rx_deliver, mcip, NULL, NULL);
			}
			mac_update_resources(mrp, MCIP_RESOURCE_PROPS(mcip),
			    B_FALSE);
		}
	}
	i_mac_perim_exit(mip);
	kmem_free(mrp, sizeof (*mrp));
	return (MH_WALK_CONTINUE);
}

static void
mac_pool_update(void *arg)
{
	mod_hash_walk(i_mac_impl_hash, mac_pool_link_update, arg);
	kmem_free(arg, sizeof (struct mac_pool_arg));
}

/*
 * Callback function to be executed when a noteworthy pool event
 * takes place.
 */
/* ARGSUSED */
static void
mac_pool_event_cb(pool_event_t what, poolid_t id, void *arg)
{
	pool_t			*pool;
	char			*poolname = NULL;
	struct mac_pool_arg	*mpa;

	pool_lock();
	mpa = kmem_zalloc(sizeof (struct mac_pool_arg), KM_SLEEP);

	switch (what) {
	case POOL_E_ENABLE:
	case POOL_E_DISABLE:
		break;

	case POOL_E_CHANGE:
		pool = pool_lookup_pool_by_id(id);
		if (pool == NULL) {
			kmem_free(mpa, sizeof (struct mac_pool_arg));
			pool_unlock();
			return;
		}
		pool_get_name(pool, &poolname);
		(void) strlcpy(mpa->mpa_poolname, poolname,
		    sizeof (mpa->mpa_poolname));
		break;

	default:
		kmem_free(mpa, sizeof (struct mac_pool_arg));
		pool_unlock();
		return;
	}
	pool_unlock();

	mpa->mpa_what = what;

	mac_pool_update(mpa);
}

/*
 * Set the effective rings property. This could be called from
 * datapath_setup/datapath_teardown or set-linkprop.
 * If the group is reserved we just go ahead and set the effective rings.
 * Additionally, for TX this could mean the default group has lost/gained
 * some rings, so if the default group is reserved, we need to adjust the
 * effective rings for the default group clients. For RX, if we are working
 * with the non-default group, we just need to reset the effective props
 * for the default group clients.
 */
void
mac_set_rings_effective(mac_client_impl_t *mcip)
{
	mac_impl_t		*mip = mcip->mci_mip;
	mac_group_t		*grp;
	mac_group_t		*defgrp;
	flow_entry_t		*flent = mcip->mci_flent;
	mac_resource_props_t	*emrp = MCIP_EFFECTIVE_PROPS(mcip);
	mac_grp_client_t	*mgcp;
	mac_client_impl_t	*gmcip;

	grp = flent->fe_rx_ring_group;
	if (grp != NULL) {
		defgrp = MAC_DEFAULT_RX_GROUP(mip);
		/*
		 * If we have reserved a group, set the effective rings
		 * to the ring count in the group.
		 */
		if (grp->mrg_state == MAC_GROUP_STATE_RESERVED) {
			emrp->mrp_mask |= MRP_RX_RINGS;
			emrp->mrp_nrxrings = grp->mrg_cur_count;
		}

		/*
		 * We go through the clients in the shared group and
		 * reset the effective properties. It is possible this
		 * might have already been done for some client (i.e.
		 * if some client is being moved to a group that is
		 * already shared). The case where the default group is
		 * RESERVED is taken care of above (note that on the RX
		 * side, if there is a non-default group, the default
		 * group is always SHARED).
		 */
		if (grp != defgrp || grp->mrg_state == MAC_GROUP_STATE_SHARED) {
			if (grp->mrg_state == MAC_GROUP_STATE_SHARED)
				mgcp = grp->mrg_clients;
			else
				mgcp = defgrp->mrg_clients;
			while (mgcp != NULL) {
				gmcip = mgcp->mgc_client;
				emrp = MCIP_EFFECTIVE_PROPS(gmcip);
				if (emrp->mrp_mask & MRP_RX_RINGS) {
					emrp->mrp_mask &= ~MRP_RX_RINGS;
					emrp->mrp_nrxrings = 0;
				}
				mgcp = mgcp->mgc_next;
			}
		}
	}

	/* Now the TX side */
	grp = flent->fe_tx_ring_group;
	if (grp != NULL) {
		defgrp = MAC_DEFAULT_TX_GROUP(mip);

		if (grp->mrg_state == MAC_GROUP_STATE_RESERVED) {
			emrp->mrp_mask |= MRP_TX_RINGS;
			emrp->mrp_ntxrings = grp->mrg_cur_count;
		} else if (grp->mrg_state == MAC_GROUP_STATE_SHARED) {
			mgcp = grp->mrg_clients;
			while (mgcp != NULL) {
				gmcip = mgcp->mgc_client;
				emrp = MCIP_EFFECTIVE_PROPS(gmcip);
				if (emrp->mrp_mask & MRP_TX_RINGS) {
					emrp->mrp_mask &= ~MRP_TX_RINGS;
					emrp->mrp_ntxrings = 0;
				}
				mgcp = mgcp->mgc_next;
			}
		}

		/*
		 * If the group is not the default group and the default
		 * group is reserved, the ring count in the default group
		 * might have changed, update it.
		 */
		if (grp != defgrp &&
		    defgrp->mrg_state == MAC_GROUP_STATE_RESERVED) {
			gmcip = MAC_GROUP_ONLY_CLIENT(defgrp);
			emrp = MCIP_EFFECTIVE_PROPS(gmcip);
			emrp->mrp_ntxrings = defgrp->mrg_cur_count;
		}
	}
	emrp = MCIP_EFFECTIVE_PROPS(mcip);
}

/*
 * Check if the primary is in the default group. If so, see if we
 * can give it an exclusive group now that another client is
 * being configured. We take the primary out of the default group
 * because the multicast/broadcast packets for all the clients
 * will land in the default ring in the default group, which means
 * any client in the default group, even if it is the only one in
 * the group, will lose exclusive access to the rings and hence
 * the benefit of polling.
 */
mac_client_impl_t *
mac_check_primary_relocation(mac_client_impl_t *mcip, boolean_t rxhw)
{
	mac_impl_t		*mip = mcip->mci_mip;
	mac_group_t		*defgrp = MAC_DEFAULT_RX_GROUP(mip);
	flow_entry_t		*flent = mcip->mci_flent;
	mac_resource_props_t	*mrp = MCIP_RESOURCE_PROPS(mcip);
	uint8_t			*mac_addr;
	mac_group_t		*ngrp;

	/*
	 * Check if the primary is in the default group; if it is not,
	 * or if it is explicitly configured to be in the default group
	 * or has set the RX rings property, return.
	 */
	if (flent->fe_rx_ring_group != defgrp || mrp->mrp_mask & MRP_RX_RINGS)
		return (NULL);

	/*
	 * If the new client needs an exclusive group and we
	 * don't have another for the primary, return.
	 */
	if (rxhw && mip->mi_rxhwclnt_avail < 2)
		return (NULL);

	mac_addr = flent->fe_flow_desc.fd_dst_mac;
	/*
	 * We call this when we are setting up the datapath for
	 * the first non-primary.
	 */
	ASSERT(mip->mi_nactiveclients == 2);

	/*
	 * OK, now we have the primary that needs to be relocated.
	 */
	ngrp = mac_reserve_rx_group(mcip, mac_addr, B_TRUE);
	if (ngrp == NULL)
		return (NULL);
	if (mac_rx_switch_group(mcip, defgrp, ngrp) != 0) {
		mac_stop_group(ngrp);
		return (NULL);
	}
	return (mcip);
}

void
mac_transceiver_init(mac_impl_t *mip)
{
	if (mac_capab_get((mac_handle_t)mip, MAC_CAPAB_TRANSCEIVER,
	    &mip->mi_transceiver)) {
		/*
		 * The driver set a flag that we don't know about. In
		 * this case, warn and ignore the capability.
		 */
		if (mip->mi_transceiver.mct_flags != 0) {
			dev_err(mip->mi_dip, CE_WARN, "driver set transceiver "
			    "flags to invalid value: 0x%x, ignoring "
			    "capability", mip->mi_transceiver.mct_flags);
			bzero(&mip->mi_transceiver,
			    sizeof (mac_capab_transceiver_t));
		}
	} else {
		bzero(&mip->mi_transceiver,
		    sizeof (mac_capab_transceiver_t));
	}
}

int
mac_transceiver_count(mac_handle_t mh, uint_t *countp)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;

	ASSERT(MAC_PERIM_HELD(mh));

	if (mip->mi_transceiver.mct_ntransceivers == 0)
		return (ENOTSUP);

	*countp = mip->mi_transceiver.mct_ntransceivers;
	return (0);
}

int
mac_transceiver_info(mac_handle_t mh, uint_t tranid, boolean_t *present,
    boolean_t *usable)
{
	int			ret;
	mac_transceiver_info_t	info;
	mac_impl_t		*mip = (mac_impl_t *)mh;

	ASSERT(MAC_PERIM_HELD(mh));

	if (mip->mi_transceiver.mct_info == NULL ||
	    mip->mi_transceiver.mct_ntransceivers == 0)
		return (ENOTSUP);

	if (tranid >= mip->mi_transceiver.mct_ntransceivers)
		return (EINVAL);

	bzero(&info, sizeof (mac_transceiver_info_t));
	if ((ret = mip->mi_transceiver.mct_info(mip->mi_driver, tranid,
	    &info)) != 0) {
		return (ret);
	}

	*present = info.mti_present;
	*usable = info.mti_usable;
	return (0);
}

int
mac_transceiver_read(mac_handle_t mh, uint_t tranid, uint_t page, void *buf,
    size_t nbytes, off_t offset, size_t *nread)
{
	int		ret;
	size_t		nr;
	mac_impl_t	*mip = (mac_impl_t *)mh;

	ASSERT(MAC_PERIM_HELD(mh));

	if (mip->mi_transceiver.mct_read == NULL)
		return (ENOTSUP);

	if (tranid >= mip->mi_transceiver.mct_ntransceivers)
		return (EINVAL);

	/*
	 * All supported pages today are 256 bytes wide. Make sure offset +
	 * nbytes never exceeds that.
	 */
	if (offset < 0 || offset >= 256 || nbytes > 256 ||
	    offset + nbytes > 256)
		return (EINVAL);

	if (nread == NULL)
		nread = &nr;
	ret = mip->mi_transceiver.mct_read(mip->mi_driver, tranid, page, buf,
	    nbytes, offset, nread);
	if (ret == 0 && *nread > nbytes) {
		dev_err(mip->mi_dip, CE_PANIC, "driver wrote %lu bytes into "
		    "%lu byte sized buffer, possible memory corruption",
		    *nread, nbytes);
	}

	return (ret);
}
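
/*
 * For example, a caller holding the mac perimeter could walk the
 * transceivers and read the beginning of page 0xa0 (the identification
 * page on SFP-style devices). Illustrative sketch only; buf, cnt, i,
 * nread and the surrounding error handling are hypothetical:
 *
 *	uint_t cnt, i;
 *	uint8_t buf[128];
 *	size_t nread;
 *	boolean_t present, usable;
 *
 *	if (mac_transceiver_count(mh, &cnt) == 0) {
 *		for (i = 0; i < cnt; i++) {
 *			if (mac_transceiver_info(mh, i, &present,
 *			    &usable) != 0 || !present)
 *				continue;
 *			if (mac_transceiver_read(mh, i, 0xa0, buf,
 *			    sizeof (buf), 0, &nread) == 0) {
 *				... parse the nread valid bytes ...
 *			}
 *		}
 *	}
 */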

void
mac_led_init(mac_impl_t *mip)
{
	mip->mi_led_modes = MAC_LED_DEFAULT;

	if (!mac_capab_get((mac_handle_t)mip, MAC_CAPAB_LED, &mip->mi_led)) {
		bzero(&mip->mi_led, sizeof (mac_capab_led_t));
		return;
	}

	if (mip->mi_led.mcl_flags != 0) {
		dev_err(mip->mi_dip, CE_WARN, "driver set led capability "
		    "flags to invalid value: 0x%x, ignoring "
		    "capability", mip->mi_led.mcl_flags);
		bzero(&mip->mi_led, sizeof (mac_capab_led_t));
		return;
	}

	if ((mip->mi_led.mcl_modes & ~MAC_LED_ALL) != 0) {
		dev_err(mip->mi_dip, CE_WARN, "driver set led capability "
		    "supported modes to invalid value: 0x%x, ignoring "
		    "capability", mip->mi_led.mcl_modes);
		bzero(&mip->mi_led, sizeof (mac_capab_led_t));
		return;
	}
}

int
mac_led_get(mac_handle_t mh, mac_led_mode_t *supported, mac_led_mode_t *active)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;

	ASSERT(MAC_PERIM_HELD(mh));

	if (mip->mi_led.mcl_set == NULL)
		return (ENOTSUP);

	*supported = mip->mi_led.mcl_modes;
	*active = mip->mi_led_modes;

	return (0);
}

/*
 * Update and multiplex the various LED requests. We only ever send one LED to
 * the underlying driver at a time. As such, we end up multiplexing all
 * requested states and picking one to send down to the driver.
 */
int
mac_led_set(mac_handle_t mh, mac_led_mode_t desired)
{
	int		ret;
	mac_led_mode_t	driver;
	mac_impl_t	*mip = (mac_impl_t *)mh;

	ASSERT(MAC_PERIM_HELD(mh));

	/*
	 * A desired value of zero means reset to the default mode.
	 */
	if (desired == 0)
		desired = MAC_LED_DEFAULT;

	if (mip->mi_led.mcl_set == NULL)
		return (ENOTSUP);

	/*
	 * Catch both values that we don't know about and those that the
	 * driver doesn't support.
	 */
	if ((desired & ~MAC_LED_ALL) != 0)
		return (EINVAL);

	if ((desired & ~mip->mi_led.mcl_modes) != 0)
		return (ENOTSUP);

	/*
	 * If we have the same value, then there is nothing to do.
	 */
	if (desired == mip->mi_led_modes)
		return (0);

	/*
	 * Based on the desired value, determine what to send to the driver.
	 * We will only send a single bit to the driver at any given time.
	 * IDENT takes priority over OFF or ON. We also let OFF take priority
	 * over the rest.
	 */
	if (desired & MAC_LED_IDENT) {
		driver = MAC_LED_IDENT;
	} else if (desired & MAC_LED_OFF) {
		driver = MAC_LED_OFF;
	} else if (desired & MAC_LED_ON) {
		driver = MAC_LED_ON;
	} else {
		driver = MAC_LED_DEFAULT;
	}

	if ((ret = mip->mi_led.mcl_set(mip->mi_driver, driver, 0)) == 0) {
		mip->mi_led_modes = desired;
	}

	return (ret);
}
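
/*
 * For example, a caller holding the mac perimeter could blink the
 * device's LED for identification and then restore the default
 * behavior; passing 0 resets the mode to MAC_LED_DEFAULT (illustrative
 * sketch only):
 *
 *	if (mac_led_set(mh, MAC_LED_IDENT) == 0) {
 *		... let the operator locate the device ...
 *		(void) mac_led_set(mh, 0);
 *	}
 */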

/*
 * Send packets through the Tx ring ('mrh') or through the default
 * handler if no ring is specified. Before passing the packet down to
 * the MAC provider, emulate any hardware offloads which have been
 * requested but are not supported by the provider.
 */
mblk_t *
mac_ring_tx(mac_handle_t mh, mac_ring_handle_t mrh, mblk_t *mp)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;

	if (mrh == NULL)
		mrh = mip->mi_default_tx_ring;

	if (mrh == NULL)
		return (mip->mi_tx(mip->mi_driver, mp));
	else
		return (mac_hwring_tx(mrh, mp));
}

/*
 * This is the final stop before reaching the underlying MAC provider.
 * This is also where the bridging hook is inserted. Packets that are
 * bridged will return through mac_bridge_tx(), with rh nulled out if
 * the bridge chooses to send output on a different link due to
 * forwarding.
 */
mblk_t *
mac_provider_tx(mac_impl_t *mip, mac_ring_handle_t rh, mblk_t *mp,
    mac_client_impl_t *mcip)
{
	/*
	 * If there is a bound Hybrid I/O share, send packets through
	 * the default tx ring. When there's a bound Hybrid I/O share,
	 * the tx rings of this client are mapped in the guest domain
	 * and not accessible from here.
	 */
	if (mcip->mci_state_flags & MCIS_SHARE_BOUND)
		rh = mip->mi_default_tx_ring;

	if (mip->mi_promisc_list != NULL)
		mac_promisc_dispatch(mip, mp, mcip, B_FALSE);

	if (mip->mi_bridge_link == NULL)
		return (mac_ring_tx((mac_handle_t)mip, rh, mp));
	else
		return (mac_bridge_tx(mip, rh, mp));
}
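
/*
 * As with the other Tx entry points, any mblks that could not be
 * transmitted are returned to the caller, which is expected to hold on
 * to them and retry later, typically once a registered Tx-notify
 * callback (see mac_client_tx_notify() above) reports that the path has
 * drained. Illustrative sketch only:
 *
 *	if ((mp = mac_ring_tx(mh, NULL, mp)) != NULL) {
 *		... enqueue mp and retry after the notify callback ...
 *	}
 */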