How do I set up a privilege escalation lab exploiting a buffer overflow on Kali Linux?

I've compiled a vulnerable application on my Kali Linux 2019.2 box and given it setuid permissions (-r-sr-x---).
Then I ran my payload as a non-root user and got a shell, but whoami shows that I'm not root; I'm still the same user.

Could someone give me some advice? I can't work out what I'm missing. I tried to find a guide, but without luck.

SQL Server 2017 – Large amount of free space in buffer pool database pages

We were looking at sys.dm_os_buffer_descriptors using the query below

select
    d.name                                                   as Database_Name,
    (count(b.file_id) * 8) / 1024                            as Buffer_Pool_Size_MB,
    sum(cast(b.free_space_in_bytes as bigint)) / 1024 / 1024 as Free_Space_MB
from sys.dm_os_buffer_descriptors b
    join sys.databases d on
        b.database_id = d.database_id
group by d.name
order by Buffer_Pool_Size_MB

For one of my databases, this reports Buffer_Pool_Size_MB = 77325 megabytes and Free_Space_MB = 15849 megabytes.

So about 20% of that database's space in the buffer pool pages is empty, which looks like a waste of resources.
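As a quick sanity check on the arithmetic (illustrative only, using the two figures reported above):

```python
buffer_pool_mb = 77325  # Buffer_Pool_Size_MB from the query above
free_space_mb = 15849   # Free_Space_MB from the query above

pct_free = free_space_mb / buffer_pool_mb * 100
print(round(pct_free, 1))  # 20.5 -- consistent with "about 20%"
```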

Questions:

  • Is this a problem?
  • How much free_space_in_bytes warrants mitigation?
  • Is there anything else we should consider in our situation?

javascript – node.js – Does a buffer built with 'space.repeat' work the same way in all browsers?

I'm trying to return the string below from the Node.js response object to the React application, with some predefined spaces manually added between buffer segments, like this:

const space = ' ';
const nextLine = '\r\n';

const nodeResp = Buffer.concat([
    Buffer.from(` This ${nextLine}`),
    Buffer.from(` is ${nextLine} `),
    Buffer.from(` a ${nextLine}`),
    Buffer.from(`Sample ${space.repeat(84)} : yess ${nextLine}`),
    Buffer.from(`React ${space.repeat(85)} : nooo ${nextLine}`),
    Buffer.from(`App ${space.repeat(87)} : yess ${nextLine}`),
    Buffer.from('From Node.')
]);

res.set('Content-Type', 'text/plain').send(nodeResp);

Will 'space.repeat' produce the same spacing across browsers and screen resolutions?

That is: will the output of the code above display the same way at all resolutions and in all browsers, or will the spacing come out differently in different environments / operating systems?

I'm learning Node.js and need help with this.

Can someone guide me in this matter?

Thank you in advance.

c++ – producer-consumer with threads, using a boost ring buffer

I have two threads: one is the producer and the other is the consumer. My consumer is always late (due to an expensive function call, simulated in the code below using sleeps). I've used a ring buffer because I can afford to lose some events.

I'd like to know whether my locking looks correct, and I'd welcome a general C++ review.

#include <atomic>
#include <boost/circular_buffer.hpp>
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::atomic<bool> mRunning;
std::mutex m_mutex;
std::condition_variable m_condVar;

class VecBuf {
    private:
    std::vector<int> vec;

    public:
    VecBuf() = default;
    VecBuf(std::vector<int> v)
    {
        vec = v;
    }
};

std::vector<int> data{ 10, 20, 30 };

class Detacher {
    public:
    template <typename Function, typename... Args>
    void createTask(Function &&func, Args&& ... args) {
        m_threads.emplace_back(std::forward<Function>(func), std::forward<Args>(args)...);
    }

    Detacher() = default;
    Detacher(const Detacher&) = delete;
    Detacher & operator=(const Detacher&) = delete;
    Detacher(Detacher&&) = default;
    Detacher& operator=(Detacher&&) = default;

    ~Detacher() {
        for (auto& thread : m_threads) {
            thread.join();
        }
    }

    private:
    std::vector<std::thread> m_threads;
};

void foo_1(boost::circular_buffer<VecBuf> *cb)
{
    while (mRunning) {
        std::unique_lock<std::mutex> mlock(m_mutex);

        m_condVar.wait(mlock, [=]() { return !cb->empty(); });

        VecBuf local_data(cb->front());
        cb->pop_front();
        mlock.unlock();
        if (!mRunning) {
            break;
        }
        //simulate time consuming function call and consume local_data here
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }

    while (cb->size()) {
        VecBuf local_data(cb->front());
        cb->pop_front();
        if (!mRunning) {
            break;
        }
    }
}

void foo_2(boost::circular_buffer<VecBuf> *cb)
{
    while (mRunning) {
        std::unique_lock<std::mutex> mlock(m_mutex);

        while (cb->full()) {
            mlock.unlock();
            /* can we do better than this? */
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            mlock.lock();
        }
        cb->push_back(VecBuf(data));
        m_condVar.notify_one();
    }
}

int main()
{
    mRunning = true;
    boost::circular_buffer<VecBuf> cb(100);
    Detacher thread_1;
    thread_1.createTask(foo_1, &cb);
    Detacher thread_2;
    thread_2.createTask(foo_2, &cb);
    std::this_thread::sleep_for(std::chrono::milliseconds(20000));
    mRunning = false;
}

Computer architecture – Is it reasonable to model the DRAM rows corresponding to the same bank identifier as sharing a single row buffer?

I'm building a simple row-buffer simulator that works alongside a simple cache simulator to count row-buffer hits and misses. Whenever a cache block is not in the cache, I want to look it up in the main memory's row buffers and record whether it is present.

How accurate would it be to have a single long "row buffer" structure containing all the data held across the individual row buffers of the corresponding bank in each DRAM chip? Say each chip has 8 banks; I would then create 8 of these extra-long row buffers to simulate the chips. The idea rests on the fact that all the chips operate in unison, so if I load the cache block at address 0, the bank-0 row buffer in each chip fills with data starting at address 0 in chip 0 and ending at address 0 + (row-buffer length * number of DRAM chips) in the last chip. For simplicity, this assumes a row-interleaved address mapping (consecutive rows in consecutive banks).

Is there any major misunderstanding of how DRAM works that makes this a very bad way to model row-buffer behavior, or is it a reasonable simplification? I should also point out that the main goal is simplicity.
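Under the stated assumptions (chips in lockstep, consecutive rows in consecutive banks), the whole model reduces to one open-row register per bank. A tiny sketch of that bookkeeping (the parameter values are made up for illustration):

```python
NUM_BANKS = 8
ROW_BYTES = 8192   # combined row length across all chips (illustrative value)

# One "extra long" row buffer per bank: just remember which row is open.
open_row = [None] * NUM_BANKS

def access(addr):
    """Return True on a row-buffer hit; on a miss, open the row and return False."""
    row = addr // ROW_BYTES
    bank = row % NUM_BANKS      # row-interleaved: consecutive rows -> consecutive banks
    if open_row[bank] == row:
        return True
    open_row[bank] = row        # model loading the row into this bank's buffer
    return False

print(access(0), access(64), access(ROW_BYTES))  # False True False
```

The first access to address 0 misses and opens row 0 in bank 0; address 64 falls in the same row and hits; the next row lands in bank 1 and misses.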

ffmpeg – Reserving a cache or buffer to smooth out inconsistent upload speeds during live streaming

I have a pretty good broadband internet connection: a speed test reports download and upload speeds of 2 Mbps and 1 Mbps, and I can usually watch YouTube videos without buffering issues.

I use the connection to stream audio to YouTube with OBS Studio (or ffmpeg) at 400 kbps. However, the outgoing stream seems to fluctuate between 0 and 500 kbps over time, probably because of inconsistent internet speed. I have tried ffmpeg with a buffer size of about 50M, but the problem persists. How can I reserve a cache or buffer with ffmpeg to fix this problem? Thank you.
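One thing worth noting: ffmpeg's `-bufsize` sets the encoder's rate-control (VBV) buffer, not a network-side cache, so a huge value mostly just allows the bitrate to swing more rather than smoothing the upload. A typical shape for a rate-constrained stream looks like the following command sketch (the input file, ingest URL, and stream key are placeholders; `-b:a`, `-maxrate`, `-bufsize`, and `-re` are standard ffmpeg options):

```shell
# Sketch only: cap the encoder's output rate well below the 1 Mbps uplink,
# and keep -bufsize modest so the rate stays close to -maxrate.
ffmpeg -re -i input.mp3 \
       -c:a aac -b:a 400k \
       -maxrate 400k -bufsize 800k \
       -f flv rtmp://a.rtmp.youtube.com/live2/STREAM_KEY
```

If the uplink itself dips below the stream bitrate, no sender-side buffer can fully hide it; the usual mitigation is leaving more headroom between the stream bitrate and the measured upload speed.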

PDF JavaScript buffer issues

I found an official BlueBeam PDF stamp that accomplishes what I want, and I reproduced its code in my own stamp, but mine still does not work. So it's clearly something I have misconfigured, but I have no idea what. Attached are the two stamps: the JavaScript example is the working BlueBeam stamp, and the other is mine, which does not work. If someone more knowledgeable about JavaScript could take a look and help me identify the problem, I would be extremely grateful. Thank you!

My stamp: …


linux – Simple buffer overflow, trying to leak the system() address

The code is:

#include <string.h>

void vuln(char *arg) {
    char buffer[10];
    strcpy(buffer, arg);
}

int main(int argc, char **argv) {
    vuln(argv[1]);
    return 0;
}

I determined that a 26-character input is enough to overwrite EIP. In gdb, the addresses are:

system():                      0xb7e41b40
"/bin/bash" part of $SHELL:    0xbffffdac

if I run

run $(python -c "print('A' * 22 + '\x40\x1b\xe4\xb7' + 'FAKE' + '\xac\xfd\xff\xbf')")

in gdb, it drops me into a shell. Unfortunately, this does not work in a real shell because of ASLR. gdb tells me that strcpy is at 0xb7e7c750, an offset of 0x3AC10 from system(). My thinking is that if I can somehow obtain the strcpy address at run time, I can use that offset (and the offset to the string) to make system("/bin/bash") work, which is basically what I do in gdb without ASLR.
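The address arithmetic behind that plan, spelled out (the leaked value below is just the one observed in the gdb session above; under ASLR it changes on every run, but the offset within libc stays fixed):

```python
STRCPY_TO_SYSTEM = 0x3AC10    # constant offset inside libc, from the gdb session
leaked_strcpy = 0xb7e7c750    # strcpy address as seen in gdb

system_addr = leaked_strcpy - STRCPY_TO_SYSTEM
print(hex(system_addr))       # 0xb7e41b40, matching the system() address above
```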

Any advice … or direction toward applicable documentation would be greatly appreciated. I'm trying to exploit the bug successfully without disabling any security measures.

Currently, ASLR, PIE, ASCII Armor, NX (I assume) and SSP (also assumed) are all enabled. I am on Debian 9. Thank you!

Checkpoint / WAL buffer flush timing under serializable isolation

I have a question about when WAL buffers are flushed/checkpointed under serializable isolation in Postgres. Is it possible to lose acknowledged writes even if the transaction runs at the serializable isolation level? What happens if the transaction has been committed to the WAL buffer and the machine then crashes with no possibility of recovery? Does Postgres still persist serializable-isolation transactions to disk?