
What is the difference between kernel threads and user threads?

Ask Time:2011-02-14T00:00:58         Author:tazim


What is the difference between kernel threads and user threads? Is it that kernel threads are scheduled and executed in kernel mode? What techniques are used for creating kernel threads?

Is it that a user thread is scheduled and executed in user mode? Is it that the kernel does not participate in executing/scheduling user threads? When an interrupt occurs while a user thread is executing, who handles it?

Whenever a thread is created, a TCB is created for it. In the case of user-level threads, is this TCB created in the user's address space?

When switching between two user-level threads, who handles the context switching?

There is a concept of multithreading models:

  1. Many to one
  2. One to one
  3. Many to Many.

What are these models? How are these models practically used?

I have read a few articles on this topic but am still confused and want to get the concept clear.

Thanks in advance, Tazim

Author: tazim, reproduced under the CC 4.0 BY-SA copyright license with a link to the original source and this disclaimer.
Link to original article:https://stackoverflow.com/questions/4985182/what-is-the-difference-between-kernel-threads-and-user-threads
Jeff :

> What is the difference between kernel threads and user threads?

Kernel threads are privileged and can access things that are off-limits to user mode threads. Take a look at "Ring (Computer Security)" on Wikipedia. On Windows, user mode corresponds to Ring 3, while kernel mode corresponds to Ring 0.

> What are techniques used for creating kernel threads?

This is extremely dependent upon the operating system.

> Now in the case of user-level threads, is this TCB created in the user's address space?

The TCB records information about a thread that the kernel uses in running that thread, right? So if it were allocated in user space, the user mode thread could modify or corrupt it, which doesn't seem like a very good idea. So, don't you suppose it's created in kernel space?

> What are these models? How are these models practically used?

Wikipedia seems really clear about that.
2011-02-14T07:41:27
MaHuJa :

Kernel thread means a thread that the kernel is responsible for scheduling. This means, among other things, that the kernel is able to schedule the threads onto different CPUs/cores at the same time.

How to use them varies a lot with programming languages and threading APIs, but as a simple illustration:

```
void task_a();
void task_b();
int main() {
    new_thread(task_a);
    new_thread(task_b);
    // possibly do something else in the main thread
    // wait for the threads to complete their work
}
```

In every implementation I am familiar with, the kernel may pause them at any time ("pre-emptive").

User threads, or "user-scheduled threads", make the program itself responsible for switching between them. There are many ways of doing this, and correspondingly there is a variety of names for them.

On one end you have "green threads", which basically try to do the same thing kernel threads do; thus you keep all the complications of programming with real threads.

On the opposite end, you have "fibers", which are required to yield before any other fiber gets run. This means:

  1. The fibers run sequentially. There are no parallel performance gains to be had.
  2. The interactions between fibers are very well defined. Other code runs only at the exact points where you yield, so other code won't be changing variables while you're working on them.
  3. Most of the low-level complexities programmers struggle with in multithreading, such as cache coherency (judging by the MT questions on this site, most people don't get that), are not a factor.

As the simplest example of fibers I can think of:

```
while (tasks_not_done) {
    do_part_of_a();
    do_part_of_b();
}
```

where each function does some work, then returns when that part is done. Note that these run sequentially in the same "hardware thread", meaning you do not get a performance increase from parallelism. On the other hand, interactions between them are very well defined, so you don't have race conditions. The actual working of each function can vary; they could also be "user thread objects" from some vector/array.
2011-08-09T14:56:26
EmeryBerger :

Wikipedia has answers to most if not all of these questions:

http://en.wikipedia.org/wiki/Thread_(computer_science)

http://en.wikipedia.org/wiki/Thread_(computer_science)#Processes.2C_kernel_threads.2C_user_threads.2C_and_fibers
2011-02-13T16:42:25