
Operating system development - some general questions.....

Discussion in 'Software Development' started by SpoonLicker, Feb 5, 2013.

Thread Status:
Not open for further replies.
  1. SpoonLicker

    SpoonLicker Banned Thread Starter

    Joined:
    Feb 5, 2013
    Messages:
    15
    1.If I write a bootloader in Assembly for my microarchitecture (x86), and execute it on a real machine as a self-standing executable code file, does that mean I "own" that code?

    2.Are all 16-bit mode processors limited to the same address bus, or memory accessing limitation of 1 MB at all times? Aside from memory segmentation, is there any native 16-bit processors that have larger address buses without segmentation implementation, and is it even possible? I ask because I want to stay in Real Mode for now.

    3.When writing to hardware, such as sound cards and video memory for screen data, is it the best solution to directly map out, sort, and designate memory and code directly, or access everything through BIOS/UEFI/EFI firmware support, if applicable/possible?

    4.Is it worth the effort planning it all out in crystal clear detail before doing anything, studying every possible outcome and capability, hardware instructions, etc., or should one just "learn as they go" and prepare for trials of failure?

    5.In terms of speed and data processing at the hardware level, would writing every aspect of the kernel, bootloader, device accessing software, graphics libraries, filesystem, user-level software, and rendering in Assembly make a difference in speed than compiling, parsing, and linking a higher-level language? In terms of speed, what measurement of an increase would be realistically useful here, and would it be worth it?
     
  2. loserOlimbs

    loserOlimbs

    Joined:
    Jun 19, 2004
    Messages:
    7,800
    1. Assuming you are not using snippets from others, or using large chunks you did not write of your own design. Then generally yes.
    2. Addressable memory is based on the width of the address, not the data: 2^x bytes, where x is the number of address bits. A plain 16-bit address reaches 2^16 = 65,536 bytes, i.e. 64 KB. The 8086 gets to 1 MB because segment:offset forms a 20-bit physical address (segment * 16 + offset); without segmentation, or a switch out of Real Mode, a 16-bit x86 cannot see past that.
    3. Better in what way? Pure performance: direct access wins. Stability: only if the hardware will always be the same... Sanity: let the BIOS/UEFI services handle it where applicable.
    4. If you want to really use it, yes. Work out your logic ahead of time so that you can follow a well planned route by just filling out the code.
    5. This is a HUGE question, so I will trim the answer. On modern x86/x64 there is likely no discernible difference. Saving 100 or even 1,000 clock cycles is not noticeable on a processor running 2+ billion cycles per second (2+ GHz). Development time usually matters more than raw speed when you consider this. Microsoft, for example, uses assembly for booting and some basic low-level calls; above that sit the Win32 APIs, written in higher-level code (largely C and C++) that can drop to assembly if needed. Above that again are .NET, C#, and a few other technologies that are generally what gets presented to the user. That managed code (C# and Java are great examples here) is SLOW compared to assembly, but applications can be written in hours or days instead of months or years. Also, the applications can be managed, and the code is safer because only predefined things can be done directly or indirectly. The overhead comes from compiling to bytecode that is then JIT-compiled to native code at run time, but again the speed difference is only absolute; it is not really "feel-able" for the most part.

    The most frequent measurement is basically a loop doing some simple math plus a stopwatch. In C it looks like this:

    Code:
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        clock_t start = clock();            /* get time */
        long r = 0;
        for (long i = 1; i <= 100000; i++)  /* loop 100,000 times */
            r = r + i + 5;                  /* some simple math */
        clock_t end = clock();              /* get time again */
        /* second time - first time = time to run it all */
        printf("r=%ld in %.6f s\n", r, (double)(end - start) / CLOCKS_PER_SEC);
        return 0;
    }
     
  3. SpoonLicker

    SpoonLicker Banned Thread Starter

    Joined:
    Feb 5, 2013
    Messages:
    15
    I don't see what you're getting at by giving me a pseudo instruction clock pipeline of a microprocessor....
     
  4. loserOlimbs

    loserOlimbs

    Joined:
    Jun 19, 2004
    Messages:
    7,800
  5. SpoonLicker

    SpoonLicker Banned Thread Starter

    Joined:
    Feb 5, 2013
    Messages:
    15
    Isn't it ultimately easier to do everything in Assembly? There's a burden that comes with high-level compilation: most compilers assume the code will run on an OS rather than be the OS, so working close to the architecture can be simpler than compiling and linking (a linker script is likely needed) every single high-level source file into a flat binary. There's also added overhead, since most high-level languages don't expose all the registers, and the code must stick to the language's freestanding core (no standard libraries unless you port them). If you did everything in Assembly, everything would be there at your grasp.

    And to add: since you're writing from basically nothing to get an entire system of software interacting as directly with the hardware as possible, wouldn't being able to interface with the processor natively be essential not only to speed but to consistent, accurate execution in general?

    I hear of far more errors in OS development from people using higher-level languages than from those using Assembly, possibly because of the overhead and the optimization required.

    Suffice it to say, higher-level languages make it easier to write code, but oftentimes harder to properly optimize and execute.
     
  6. Baltio_Orange

    Baltio_Orange Banned

    Joined:
    Feb 11, 2013
    Messages:
    4
    One of the problems with doing everything in Assembly is consistency.

    When doing everything in Assembly you really have to do everything basically.

    Also, there's the lack of higher-level data types: in Assembly you work with bytes and words, not int and char. You have to define bytes, move bytes, test bytes, compare bytes, and so on.

    In a language like C you just define a variable of some type, do what you want with it, and it's done; there's no juggling registers, the stack, or moving the data around yourself. Not to mention Assembly reads poorly unless you're experienced with it and comment as necessary to jog your memory.

    For example, which of these sounds easier?

    Code:
    // In C
    int x = 2;
    int y = 2;
    if (x > y)
    {
        GoToThisFunction();
    }

    ; In x86 Assembly (jle skips the call when x is NOT greater)
    mov al, 2
    mov bl, 2
    cmp al, bl
    jle skip
    call GoToThisFunction
    skip:
    
    If you're experienced in Assembly, none of this may bother you; but for a regular programmer, and across the sheer volume of instructions an entire OS requires, adding some C or the like certainly wouldn't hurt, and it can spare you real pain in situations where pure Assembly would be extremely difficult to implement.
     
  7. SpoonLicker

    SpoonLicker Banned Thread Starter

    Joined:
    Feb 5, 2013
    Messages:
    15
    Good observation, Baltio_Orange.

    However, I don't think Assembly would be much different for someone who knows hardware and computer science better than the average programmer does.

    That's not to say that an Android developer would get lost, but app developers interface with hardware through high-level bytecode and libraries that call drivers on Linux to do the work.

    iOS does the same thing, but the difference being that Apple doesn't use process virtual machines like Android-Linux does.

    I think a low-level programmer who truly understands the architecture, and computer assembly in general, would find writing Assembly not much harder than C, and given the speed at stake in the long run I can't imagine why someone that knowledgeable and capable would give up clock cycles for easier syntax.
     
  8. loserOlimbs

    loserOlimbs

    Joined:
    Jun 19, 2004
    Messages:
    7,800
    Because with today's hardware the difference in speed is imperceptible to anything but a machine, for most applications. It's hard to justify development time being x times longer, production cost y times higher, and bug-tracing z times harder for a speed difference almost no one would ever notice.

    You could maybe make the case for something like a watch or a robot, where clock speed and storage are both scarce. But when there are terabyte drives and 8-core 4 GHz processors, it's hard to say "Let's hire two dozen highly specialized assembly guys who charge more, instead of ten good C++ guys for less each; no one will ever know the difference anyway!"
     
  9. Baltio_Orange

    Baltio_Orange Banned

    Joined:
    Feb 11, 2013
    Messages:
    4
    I think Spoonlicker wants to save her clock cycles in an indie project. This seems to be more of a hobby than professional, although hobbyist operating systems are potentially professional. One person can write a mini operating system; I have done it before, but not entirely in Assembly.
     
Short URL to this thread: https://techguy.org/1088339

  1. This site uses cookies to help personalise content, tailor your experience and to keep you logged in if you register.
    By continuing to use this site, you are consenting to our use of cookies.
    Dismiss Notice