memory – DDR4 SODIMM Compatibility (Are there different SODIMM variations?)

I am looking to get an 8GB DDR4 SODIMM module to upgrade my laptop’s RAM. I looked it up on Crucial, and the recommended module was this:

As you can see, the notch in the pins is in the middle and slightly to the left. However, I have also found slightly cheaper 8GB DDR4 SODIMM modules that run at 2400 MHz instead (which is the original RAM speed), but the notch is in the middle towards the right, which makes me wonder whether there will be a compatibility issue when I try to fit the new RAM in.

I can’t find a pin count for the Crucial module, though the 2400 MHz module is listed as 260-pin on Newegg. Are these different variations of the SODIMM format, or will either work in any DDR4 SODIMM slot? If not, how do I tell which one is correct, short of taking my laptop apart and looking? And yes, I know it is slightly preferable to have the same clock speed across all RAM, and that it will only run at the speed of the slowest stick, but I’m also considering swapping all of the RAM out for 2666 MHz modules, so it would be good to know. Thanks!

arch linux – Cannot allocate memory on Arch Linux

I have a C program that uses lots of memory.

I also have an Arch Linux box with 32 GB of RAM.

My program fails to allocate more than ~12 GB of RAM, even if I start it as root.

I checked ulimit, but I did not find anything wrong there:

# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 124349
max locked memory       (kbytes, -l) 16384
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 124349
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

If I run it on my laptop (16 GB RAM), I cannot allocate more than ~6 GB. However, I have not done much research on that machine.

If I run it on CentOS, I have no such problem.

c# – In-memory database in unit tests: how to isolate the tests

I have stumbled across these unit tests, which use an in-memory database, in a code review:

private DatabaseContext _context;
private Fixture _fixture;

[SetUp]
public void Setup()
{
    _fixture = new Fixture();
    _fixture.Customize(new AutoNSubstituteCustomization());

    var options = new DbContextOptionsBuilder<DatabaseContext>()
        .UseInMemoryDatabase(databaseName: "testdb")
        .Options;
    _context = new DatabaseContext(options);
}

[TearDown]
public void CleanUp()
{
    var context = _context;
    if (context == null || context.Database.ProviderName != "Microsoft.EntityFrameworkCore.InMemory")
        return;

    context.Database.EnsureDeleted();
    context.Dispose();
    _context = null;
}

#region EmptyDB
[Test]
public void Test1()
{
    // Setup
    var logger = _fixture.Freeze<ILogger<UserRepository>>();
    var userRepo = new UserRepository(_context, logger);

    var userViews = new List<UserView>();

    // ACT
    userRepo.UpdateUsers(userViews, CancellationToken.None).GetAwaiter().GetResult();

    // ASSERT
    Assert.AreEqual(10, _context.Users.CountAsync().GetAwaiter().GetResult());
}

[Test]
public void Test2()
{
    // Setup
    var logger = _fixture.Freeze<ILogger<UserRepository>>();
    var userRepo = new UserRepository(_context, logger);

    _fixture.Register<IEnumerable<UserView>>(() =>
        new List<UserView> { new UserView("fish") });
    var userViews = _fixture.Create<IEnumerable<UserView>>();

    // ACT
    userRepo.UpdateUsers(userViews, CancellationToken.None).GetAwaiter().GetResult();

    // ASSERT
    Assert.AreEqual(10, _context.Users.CountAsync().GetAwaiter().GetResult());
}
#endregion

As you can see, the tests are using the same in-memory database, which I really don’t like. I also don’t like the new UserRepository(_context, logger) call. Is it bad practice to use the new keyword like this?

I would prefer something like this instead:

[Test]
public void Test1()
{
    // Setup
    var provider = RegisterServices();
    var logger = _fixture.Freeze<ILogger<UserRepository>>();
    var userRepo = provider.GetRequiredService<IUserRepository>();

    var userViews = new List<UserView>();

    // ACT
    userRepo.UpdateUsers(userViews, CancellationToken.None).GetAwaiter().GetResult();

    // ASSERT
    Assert.AreEqual(10, _context.Users.CountAsync().GetAwaiter().GetResult());
}

private ServiceProvider RegisterServices([CallerMemberName] string memberName = "")
{
    var services = new ServiceCollection();

    services.AddDbContext<IDatabaseContext, DatabaseContext>(options =>
        options.UseInMemoryDatabase(databaseName: memberName));

    // Repository registration implied by the GetRequiredService call above
    services.AddScoped<IUserRepository, UserRepository>();

    return services.BuildServiceProvider();
}

As you can see, I have added a RegisterServices method that takes the name of the calling test as a parameter and uses it to name the in-memory database. I really like this because it isolates the tests from each other. I also think it’s cleaner to read.

How would you handle this case? Is the first approach the way to go, or is my approach the more “right” way to do it? Or is there another, better way that is closer to best practice?

I just want to know your opinions about this and about the two approaches above.

windows 10 – Google Chrome unloads discrete GPU memory, causing an initial lag (around 1–2 seconds) when I start scrolling

Ever since I switched Chrome’s graphics processor from the Intel HD 630 to the GTX 1060 on my laptop (in the NVIDIA Control Panel), I have had this lag when trying to scroll a page after around 30 seconds of idling.

The Windows 10 Task Manager shows that the lag happens after the GPU unloads its memory (only Chrome is running on the discrete GPU). When I start scrolling, the GPU takes a couple of seconds to start using VRAM (around 0.1–0.2 GB), and only then does the page unfreeze.

Disabling everything in chrome://flags didn’t help.

Turning on the high-performance power plan in Windows settings didn’t help either.

Running on integrated graphics helps, but it’s a worse option (for a laptop that is always on AC power).

NVIDIA driver: 445.87

Here are my chrome://gpu feature states:

Graphics Feature Status
Canvas: Hardware accelerated
Flash: Hardware accelerated
Flash Stage3D: Hardware accelerated
Flash Stage3D Baseline profile: Hardware accelerated
Compositing: Hardware accelerated
Multiple Raster Threads: Enabled
Out-of-process Rasterization: Hardware accelerated
OpenGL: Enabled
Hardware Protected Video Decode: Unavailable
Rasterization: Hardware accelerated
Skia Renderer: Enabled
Video Decode: Hardware accelerated
Vulkan: Disabled
WebGL: Hardware accelerated
WebGL2: Hardware accelerated

c++ – Bitmap BMP reading question: I need to read (large) 8 MB 32BPP BMP files into memory so they can be queued for game play

class Surface
{
public:
    virtual ~Surface();
    // ...
};

class IMAGE  // create a discrete type to be used to instantiate the queue template
{
public:
    unsigned short width, height;  // single bitmap image width and height
    unsigned int size;             // single bitmap image size
    DWORD* buffer;                 // buffer to hold a width x height _size_ image
};

class Bitmap : protected Surface
{
public:
    class BMP_32
    {
    public:
        BMPHeader HeaderSection;
        BMPInfo InfoSection;
        BMPColorTable ColorTableSection;
        // ...
    };
    // ...
};

void Bitmap::BMP_32::Read(Bitmap::BMP_32& a_bmp32, IMAGE& an_image, SURFACE& a_surface, const char* file_name)
{
    // ... (file open etc. elided)
    fread(&a_bmp32.HeaderSection.FileType, sizeof(a_bmp32.HeaderSection.FileType), 1, fp);
    // ...
    fread(&a_bmp32.InfoSection.HeaderSize, sizeof(a_bmp32.InfoSection.HeaderSize), 1, fp);
    // ...
    fread(&a_bmp32.ColorTableSection.RedIntensity, sizeof(a_bmp32.ColorTableSection.RedIntensity), 1, fp);
    // ...
    an_image.buffer = new DWORD(IMAGEBUFFERSIZE);

    // The following three examples render the same output visually:

    // DWORD dwords_read = fread(an_image.buffer, sizeof(DWORD), an_image.size, fp); // dwords_read = 2,073,600

    DWORD dwords_read = fread(an_image.buffer, (SCREEN_WIDTH * SCREEN_HEIGHT), 4, fp); // dwords_read = 4
    // DWORD dwords_read = fread(an_image.buffer, ((SCREEN_WIDTH * SCREEN_HEIGHT) * 2), 2, fp); // dwords_read = 2
}

Background detail: I am building up a class framework to read 32BPP BMP files onto a back surface, which I then BltFast to the front surface and flip to GDI. The 32BPP background BMP that I am reading is 8,294,400 bytes in size, but the most I am able to read using fread() is 2,073,600, which is exactly 25% of the original artwork. I am using Visual Studio 2019 Community Edition and use fread(&MyBuffer, arg2, arg3, stream) extensively. My application is an MFC application.

I need to read the entire BMP pixel-data section into memory, because I am saving it into a queue abstract data type. Once I have the BMP in memory, the disk I/O is done, and I just make calls to update the back surface, BltFast, and flip to update my scene. I am creating a class library for the BMP code.

Question: Is fread() hitting an upper bound on size? If so, can someone suggest how I might overcome this problem? I am hoping to make a minor substitution of this fread() call for another C++ call. I cannot do anything too fancy with multiple buffers or lots of I/O, as I have already tried that approach without success.

docker – Slab SReclaimable memory cannot be reclaimed?

CentOS Linux release 7.2.1511 (Core)

Linux version 3.10.0-514.26.2.el7.x86_64 (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)) #1 SMP Tue Jul 4 15:04:05 UTC 2017

/proc/meminfo:
MemTotal:       16267428 kB
MemFree:          237816 kB
MemAvailable:    7501712 kB
Buffers:           18076 kB
Cached:           745340 kB
SwapCached:            0 kB
Active:          5015316 kB
Inactive:         152100 kB
Active(anon):    4404088 kB
Inactive(anon):      972 kB
Active(file):     611228 kB
Inactive(file):   151128 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:              1928 kB
Writeback:             0 kB
AnonPages:       4404052 kB
Mapped:            36320 kB
Shmem:              1008 kB
Slab:           10579260 kB
SReclaimable:    6839864 kB
SUnreclaim:      3739396 kB
KernelStack:       19232 kB
PageTables:        25760 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     8133712 kB
Committed_AS:    7992196 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       94920 kB
VmallocChunk:   34359635708 kB
HardwareCorrupted:     0 kB
AnonHugePages:   2297856 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      161664 kB
DirectMap2M:    10323968 kB
DirectMap1G:     8388608 kB


slabtop output:

 Active / Total Objects (% used)    : 18223363 / 42966058 (42.4%)
 Active / Total Slabs (% used)      : 1183671 / 1183671 (100.0%)
 Active / Total Caches (% used)     : 73 / 95 (76.8%)
 Active / Total Size (% used)       : 4513721.33K / 10427564.51K (43.3%)
 Minimum / Average / Maximum Object : 0.01K / 0.24K / 8.00K

   OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
6763716 6763212  99%    0.11K 187881       36    751524K sysfs_dir_cache
5604032 949314  16%    0.06K  87563       64    350252K kmalloc-64
4202094  67116   1%    0.04K  41197      102    164788K ext4_extent_status
3893484 2373049  60%    0.19K 185404       21    741616K dentry
3748191 1802716  48%    0.58K 138832       27   2221312K inode_cache
3251724 987321  30%    0.09K  77422       42    309688K kmalloc-96
2611301 1924963  73%    0.57K  93416       28   1494656K radix_tree_node
2590224 764829  29%    0.10K  66416       39    265664K buffer_head
2042465 284009  13%    0.05K  24029       85     96116K shared_policy_node
1802221 613054  34%    1.01K  58287       31   1865184K ext4_inode_cache
1263674 182269  14%    0.31K  50548       25    404384K nf_conntrack_ffffffff81aa0e80
1251789 210295  16%    0.19K  59609       21    238436K kmalloc-192
726016 686721  94%    0.03K   5672      128     22688K kmalloc-32
712992   9160   1%    0.50K  22281       32    356496K kmalloc-512
591360  43401   7%    0.12K  18480       32     73920K kmalloc-128
579564   4356   0%    0.11K  16099       36     64396K jbd2_journal_head
310514   4893   1%    2.00K  19576       16    626432K kmalloc-2048
183680 181248  98%    0.06K   2870       64     11480K kmem_cache_node
181936 180969  99%    0.25K   5686       32     45488K kmem_cache
130254   1632   1%    0.04K   1277      102      5108K Acpi-Namespace
 84512  19793  23%    1.00K   2641       32     84512K kmalloc-1024
 83312   2464   2%    0.25K   2608       32     20864K dquot
 80224  12022  14%    0.25K   2574       32     20592K kmalloc-256
 53538   2009   3%    1.94K   3347       16    107104K TCP
 39490  15690  39%    4.00K   5106        8    163392K kmalloc-4096
 28800    860   2%    1.56K   1440       20     46080K mm_struct
 23808  20992  88%    0.02K     93      256       372K kmalloc-16

To the problem:

I am running some Docker containers on this host, with a memory limit of about 13 GB (about 5 GB actually used). I want to start another Java process, but it gets killed by the OOM killer. The SReclaimable part of Slab does not seem to be freed.

Things I tried:

echo 3 > /proc/sys/vm/drop_caches

memory – Is it trivial to protect from double free just by LD_PRELOADing a custom malloc/calloc and free?

Can’t one just implement a malloc/calloc wrapper that adds the returned pointer address to a global hash table before returning, and a free wrapper that checks for the presence of the pointer in the table before freeing (returning early if it isn’t present), and then LD_PRELOAD these malloc/calloc and free functions into a program like Firefox, in order to protect against double frees? Is there a reason why the standard malloc/calloc and free functions don’t use such a technique, or why there isn’t a secure variant that is recommended in the same way strcpy_s is recommended in place of strcpy?

Virtual Memory vs Cache for block identification

Both are based on the principle of locality. Why, then, does virtual memory use a table lookup while cache memory uses associative lookup for block identification?

SQL Server: calculating memory requirements for full database backups

In 2018, we inherited a production SQL Server 2012 FCI running on Windows Server 2012 with 32 GB of RAM. SQL Server’s max server memory was set to 23.6 GB, and things were running fine.

However, in 2019, we migrated these databases to a SQL Server 2016 FCI. After this migration, our full backups began intermittently failing due to SQL Server restarts. The log seemed to indicate these restarts were caused by low memory.

I noticed that all of these SQL Server restarts happened only while a full backup was running for our biggest (~80 GB) database. (In case it matters, this database uses the simple recovery model. The instance hosts four other databases in the full recovery model: 10 GB, 110 MB, 100 MB, and 50 MB.)

Each time one of these “low memory restarts” occurs, I have been incrementally increasing RAM and max server memory. Currently, I’m at 56 GB of RAM with max server memory at 45 GB.

In your experience, does it seem unusual for an 80 GB database to require 45 GB of max server memory during full backups? Can you share any ideas on how I can better identify how much memory my full backup truly needs? Unfortunately, I don’t have a non-production system with specs similar to this one.

memory – How to save browser cookies in RAM?

I have a laptop I use in public, where I access some sensitive information through the browser. I have already encrypted it, and cookies are automatically deleted when I close the browser. The problem is that if somebody found out the password, that person could perhaps recover the cookies and access my private data. Since RAM retains nothing once the laptop is powered off, it seems like a good place to keep my cookies.

Is there a way to store cookies in RAM?