How to simply monitor (or tail) several log files from several remote machines

Some background:

  • I have a product that’s installed in labs composed of several machines (some with 3 and some with 8 VMs) running Windows Server 2016 and up and Windows 10.
  • The product creates several log files (*.log) with different names and purposes on each machine.
  • I think those log files are created by log4net…
  • There are five services to follow up on, among them IIS, SQL Server, RabbitMQ, and the product’s own service.
  • I cannot use UNC paths (like \\server-name\logs\product.log) to reach those machines.
  • Currently, if I want to monitor a log file, I need to RDP to each machine and run the following PowerShell line: Get-Content C:\Product\Logs\Product.log -Wait -Tail 1000

The question:

  1. Is there a free, simple tool or script that can connect to those secured labs (SSL/TLS) and show those log files in real time on my host? (I already reviewed BareTail and WinTail; I didn’t understand whether Prometheus, Grafana, and other such tools are suitable for my needs…)
  2. Is there a free, simple tool or script that can connect to those secured labs (SSL/TLS) and show those services’ event logs in real time on my host?
  3. How can I use that PowerShell line to monitor the log files from all of a lab’s machines? (I already tried Invoke-Command against several machines, but my terminal became a jungle… see the sketch after this list.)
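
One way to tame the jungle is to let Invoke-Command fan out to all machines and prefix every output line with the machine it came from, using the automatic PSComputerName property that PowerShell remoting attaches to remote output. This is only a sketch: the machine names are hypothetical, and the log path is the one from the background above.

$machines = 'lab-vm1', 'lab-vm2', 'lab-vm3'   # hypothetical machine names
Invoke-Command -ComputerName $machines -ScriptBlock {
    # Stream new lines as they are appended (start with the last 10).
    Get-Content 'C:\Product\Logs\Product.log' -Wait -Tail 10
} | ForEach-Object { '[{0}] {1}' -f $_.PSComputerName, $_ }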

python – How to split one list item into two (log files)

What I have now
['DRR2018-05-24', '12:14:12.054\n']

What I would to have
['DRR', '2018-05-24', '12:14:12.054\n']

I am working with log files, and in other log files I have elements like 'DRR'. To navigate and sort them, I want to split that one list item so I can use the abbreviation on its own (see the sketch below).
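
A minimal sketch using a regular expression, assuming the first item is always an alphabetic abbreviation followed directly by an ISO date:

import re

record = ['DRR2018-05-24', '12:14:12.054\n']
# Separate the leading letters from the date in the first item.
match = re.match(r'([A-Za-z]+)(\d{4}-\d{2}-\d{2})$', record[0])
result = list(match.groups()) + record[1:]
print(result)  # ['DRR', '2018-05-24', '12:14:12.054\n']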

ag.algebraic geometry – Log canonical surface with an elliptic singularity

I would like to know if there is an example as follows:

$X$ is a log canonical surface and $x \in X$ is an elliptic singularity such that

  1. The exceptional divisor of the minimal resolution of $x$ is a cycle of rational curves (or a single nodal rational curve).
  2. The singularity $x$ is non-$\mathbb{Q}$-factorial.

I think it looks reasonable, but I do not know an explicit example of such a surface. I searched some papers, but none of them discusses $\mathbb{Q}$-factoriality.

.net – Is Output Neutralization required when logging C# exception messages to log files?

Many things in the security field are relative, meaning they depend on context.

In your case the exception may contain sensitive data. Only you can decide whether this is a weakness. A few examples:

  • If the exception text contains person names, addresses, or account numbers, this may be a security issue in some cases. Normally we don’t want such data in the logs.
  • If the exception text contains a generic statement like “User A has no permission for operation B”, this is usually safe text.
  • If the exception text is purely technical, like “NullReferenceException”, it is safe.

The exception text can also contain a stack trace, and in some cases disclosing it may be a weakness. For instance, if it contains class names and line numbers, it may be possible to work out which version of which library is used; if that library version has a known security issue, this can be used for an attack. Again, even if such a bug exists, the exploit may require very specific preconditions, and you may be safe in your case.

Consider such findings not as a problem, but as a hint that there may be a problem. Analyze it, estimate the risks, and decide whether they are acceptable. (A sketch of one simple neutralization policy follows.)
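
For illustration only, here is a minimal C# sketch of one possible neutralization policy, not a definitive implementation: it logs the exception type verbatim (technical, safe) but strips line breaks from the free-form message, which blocks log forging, and caps its length. The LogSanitizer helper is hypothetical.

using System;

static class LogSanitizer
{
    // Hypothetical helper: remove CR/LF so an attacker-influenced message
    // cannot forge extra log lines, and cap the overall length.
    public static string Sanitize(string message, int maxLength = 500)
    {
        if (string.IsNullOrEmpty(message)) return string.Empty;
        var flat = message.Replace("\r", " ").Replace("\n", " ");
        return flat.Length <= maxLength ? flat : flat.Substring(0, maxLength);
    }
}

class Program
{
    static void Main()
    {
        try
        {
            throw new InvalidOperationException("User A has no permission for operation B");
        }
        catch (Exception ex)
        {
            // The type name is logged as-is; the message goes through Sanitize.
            Console.WriteLine($"{ex.GetType().Name}: {LogSanitizer.Sanitize(ex.Message)}");
        }
    }
}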

sql server – A fatal PeopleCode SQL error occurred. Please consult your system log for details


How to compute the annual log mean and variance of a portfolio?

If I have data on a company’s mean daily log revenue spanning a year, and I’d like to estimate the annual log revenue and the variance of the company’s profit, how can I calculate it? (A sketch of the usual scaling rule follows.)
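
A hedged sketch: under the common simplifying assumption that the daily log figures are independent and identically distributed over $N$ days (e.g. $N = 252$ trading days or $N = 365$ calendar days), both the mean and the variance scale linearly:

$$\mu_{\text{annual}} = N\,\mu_{\text{daily}}, \qquad \sigma^2_{\text{annual}} = N\,\sigma^2_{\text{daily}},$$

so the annual standard deviation grows as $\sqrt{N}\,\sigma_{\text{daily}}$. If the daily figures are correlated, these formulas no longer hold as stated.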

server – Parse Back4App database as a master database change log rather than an actual database – thoughts on this design principle?

First post here. I’m looking for some general input on the correctness of what I’m trying to do. Is this a good idea or a bad idea, and what are the plus points or negative aspects of this design?

What I’m envisioning is using the Parse Back4App database as a “master database event log”, which would act as a master change log for any user accounts connected to one database. Full databases are stored with Hive on client devices. Any changes made are pushed to the server as log events, and client devices read these as instructions for any local database changes needed to keep all databases in sync. This is a last-write-wins design. The server log would auto-delete data older than, say, three months, to ensure all users are updated and the data transfer doesn’t grow too large.

My code is a bit messy, as I haven’t refactored much, so please bear with that. I’m simply looking for advice on whether this is a good idea, plus any other advice as per the first paragraph.

Secondly (possibly this part belongs on SO), if anyone wants to make general suggestions about my encryption issue, please see the README file under MAJOR ISSUE on GitHub.

Example database log with create & delete actions logged (screenshot).

FULL PROJECT

Book model

import 'package:hive/hive.dart';
part 'book_model.g.dart';

@HiveType(typeId: 0)
class Book {
  @HiveField(0)
  String title;

  @HiveField(1)
  String author;

  @HiveField(2)
  DateTime publishingDate;

  @HiveField(3)
  DateTime dateAdded;

  @HiveField(4)
  DateTime lastModified;

  Book({
    this.title,
    this.author,
    this.publishingDate,
    this.dateAdded,
    this.lastModified,
  });

  @override
  String toString() {
    return '''
  title: $title
  author: $author
  publishingDate: $publishingDate
  dateAdded: $dateAdded
  lastModified $lastModified
  ''';
  }

  Book.fromJson(Map<String, dynamic> json)
      : title = json['title'],
        author = json['author'],
        publishingDate = json['publishingDate'],
        dateAdded = json['dateAdded'],
        lastModified = json['lastModified'];

  Map<String, dynamic> toJson() => {
        'title': title,
        'author': author,
        'publishingDate': publishingDate,
        'dateAdded': dateAdded,
        'lastModified': lastModified
      };
}

Database sync item model

import 'book_model.dart';
import 'package:hive/hive.dart';
part 'database_sync_model.g.dart';

@HiveType(typeId: 1)
class DatabaseSyncItem {
  @HiveField(0)
  Book previousBookValue;

  @HiveField(1)
  Book updatedBookValue;

  @HiveField(2)
  DateTime dateAdded;

  @HiveField(3)
  DateTime lastModified;

  @HiveField(4)
  DatabaseAction entryAction;

  DatabaseSyncItem({
    this.previousBookValue,
    this.updatedBookValue,
    this.dateAdded,
    this.lastModified,
    this.entryAction,
  });

  @override
  String toString() {
    return '''
  previousValue: $previousBookValue
  updatedValue: $updatedBookValue
  dateAdded: $dateAdded
  lastModified: $lastModified
  entryAction: $entryAction
  ''';
  }

  // Turn json back into data model
  DatabaseSyncItem.fromJson(Map<String, dynamic> json)
      : previousBookValue = json['previousBookValue'],
        updatedBookValue = json['updatedBookValue'],
        dateAdded = json['dateAdded'],
        lastModified = json['lastModified'],
        entryAction = json['entryAction'];

  // Turn data model into json
  Map<String, dynamic> toJson() => {
        'previousBookValue': previousBookValue,
        'updatedBookValue': updatedBookValue,
        'dateAdded': dateAdded,
        'lastModified': lastModified,
        'entryAction': entryAction,
      };
}

enum DatabaseAction {
  create,
  update,
  delete,
}

Local database services

import 'package:hive/hive.dart';
import 'package:service_database_sync/models/book_model.dart';
import 'package:service_database_sync/services/server_database_services.dart';

class ClientDatabaseServices {
  final String hiveBox = 'book_box';
  List<Book> _bookList = [];
  Book _activeBook;
  ServerDatabaseServices parseAction = ServerDatabaseServices();

  ///
  /// CREATE EVENT
  ///
  // Add book to database & update list
  Future<void> addBook(Book newBook) async {
    var box = await Hive.openBox<Book>(hiveBox);
    await box.add(newBook);
    _bookList = box.values.toList();
    await parseAction.logCreateEvent(newBook);
  }

  ///
  /// READ EVENTS
  ///
  // Send database items to list
  Future<void> _databaseToRepository() async {
    var box = await Hive.openBox<Book>(hiveBox);
    _bookList = box.values.toList();
  }

  // Return list for use
  Future<List<Book>> getBookList() async {
    await _databaseToRepository();
    return _bookList;
  }

  // Getter for list
  List<Book> get bookList => _bookList;

  // Return specific book
  Book getBook(index) {
    return _bookList[index];
  }

  // Return list length
  int get bookCount {
    return _bookList.length;
  }

  // Get active book
  Book getActiveBook() {
    return _activeBook;
  }

  // Set active book
  void setActiveBook(key) async {
    var box = await Hive.openBox<Book>(hiveBox);
    _activeBook = box.get(key);
  }

  ///
  /// UPDATE EVENT
  ///
  // Updates specific book with new data
  void editBook({Book book, int bookKey}) async {
    var box = await Hive.openBox<Book>(hiveBox);
    await box.put(bookKey, book);
    _bookList = box.values.toList();
    _activeBook = box.get(bookKey);
  }

  ///
  /// DELETE EVENTS
  ///
  // Deletes specific book and updates list
  Future<void> deleteBook(key) async {
    var box = await Hive.openBox<Book>(hiveBox);
    await parseAction.logDeleteEvent(box.getAt(key));
    await box.deleteAt(key);
    _bookList = box.values.toList();
  }

  // Empties hive box for database reset
  Future<void> deleteAll() async {
    var box = await Hive.openBox<Book>(hiveBox);
    await box.clear();
  }
}

Server database services

import 'package:parse_server_sdk/parse_server_sdk.dart';
import 'package:service_database_sync/models/book_model.dart';
import 'package:service_database_sync/models/database_sync_model.dart';

class ServerDatabaseServices {
  // Server app keys & data
  final keyApplicationId = 'jVrkUb6tvSheT4NHqGuF9FtFDtQkmqS3pJbKRyLN';
  final keyClientKey = 'MFYPnwLM1d38TtG2523YXxMQ4lCZdX9maovSjrdu';
  final keyParseServerUrl = 'https://parseapi.back4app.com';

  // String values
  String previousBookValue = 'previousBookValue';
  String updatedBookValue = 'updatedBookValue';
  String dateAdded = 'dateAdded';
  String entryAction = 'entryAction';
  String lastModified = 'lastModified';

  ///
  ///
  /// CREATION LOG EVENT
  Future<void> logCreateEvent(Book book) async {
    final createEvent = DatabaseSyncItem(
      previousBookValue: null,
      updatedBookValue: book,
      dateAdded: book.dateAdded,
      entryAction: DatabaseAction.create,
      lastModified: DateTime.now(),
    );
    final toServer = ParseObject('Event')
      ..set(previousBookValue, createEvent.previousBookValue)
      ..set(updatedBookValue, createEvent.updatedBookValue.toJson())
      ..set(dateAdded, createEvent.dateAdded)
      ..set(entryAction, createEvent.entryAction.toString())
      ..set(lastModified, createEvent.lastModified);
    await toServer.save();
  }

  ///
  ///
  /// UPDATE LOG EVENT
  Future<void> logEditEvent(Book previousValue, Book updatedValue) async {
    updatedValue.lastModified = DateTime.now();
    final editEvent = DatabaseSyncItem(
      previousBookValue: previousValue,
      updatedBookValue: updatedValue,
      dateAdded: previousValue.dateAdded,
      entryAction: DatabaseAction.update,
      lastModified: DateTime.now(),
    );
    final toServer = ParseObject('Event')
      ..set(previousBookValue, editEvent.previousBookValue.toJson())
      ..set(updatedBookValue, editEvent.updatedBookValue.toJson())
      ..set(dateAdded, editEvent.dateAdded)
      ..set(entryAction, editEvent.entryAction.toString())
      ..set(lastModified, editEvent.lastModified);
    await toServer.save();
  }

  ///
  ///
  /// DELETE LOG EVENT
  Future<void> logDeleteEvent(Book book) async {
    book.lastModified = DateTime.now();
    final deleteEvent = DatabaseSyncItem(
      previousBookValue: book,
      updatedBookValue: null,
      dateAdded: book.dateAdded,
      entryAction: DatabaseAction.delete,
      lastModified: DateTime.now(),
    );
    final toServer = ParseObject('Event')
      ..set(previousBookValue, deleteEvent.previousBookValue.toJson())
      ..set(updatedBookValue, deleteEvent.updatedBookValue)
      ..set(dateAdded, deleteEvent.dateAdded)
      ..set(entryAction, deleteEvent.entryAction.toString())
      ..set(lastModified, deleteEvent.lastModified);
    await toServer.save();
  }
}

Main

import 'dart:io';
import 'package:hive/hive.dart';
import 'package:parse_server_sdk/parse_server_sdk.dart';
import 'package:service_database_sync/data/books_hardcoded.dart';
import 'package:service_database_sync/models/book_model.dart';
import 'package:service_database_sync/services/client_database_services.dart';
import 'package:service_database_sync/services/server_database_services.dart';

Future<void> main(List<String> arguments) async {
  Hive.init('hive_database');
  Hive.registerAdapter(BookAdapter());
  await Parse().initialize(
    ServerDatabaseServices().keyApplicationId,
    ServerDatabaseServices().keyParseServerUrl,
    clientKey: ServerDatabaseServices().keyClientKey,
    debug: true,
  );

  await addBooksToLibrary();
  sleep(Duration(seconds: 5));
  await deleteSpecificBook(2);
}

Future<void> addBooksToLibrary() async {
  final bookDatabaseActions = ClientDatabaseServices();
  await bookDatabaseActions.addBook(bookOne);
  await bookDatabaseActions.addBook(bookTwo);
  await bookDatabaseActions.addBook(bookThree);
  await bookDatabaseActions.addBook(bookFour);
}

Future<void> deleteSpecificBook(int index) async {
  final bookDatabaseActions = ClientDatabaseServices();
  await bookDatabaseActions.deleteBook(index);
}

Future<void> deleteAllBooks() async {
  final bookDatabaseActions = ClientDatabaseServices();
  await bookDatabaseActions.deleteAll();
}

void printBookLibrary() async {
  final bookDatabaseActions = ClientDatabaseServices();
  final box = await Hive.openBox<Book>(bookDatabaseActions.hiveBox);
  final books = box.values;
  print(books);
}

log messages – Incorporating Drupal 8 logs into Splunk

In Drupal 8 you may define your own logger implementation to do whatever you want with log messages. The default logger provided by core Drupal saves these messages in the Drupal database and makes them available in the UI at /admin/reports/dblog. This default logger is implemented in the core dblog module by the logger class Drupal\dblog\Logger\DbLog, and that’s a great example to use when you write your own.

A logger is a class that implements LoggerInterface and is used as a service with the ‘logger’ service tag. All registered loggers are used for every channel. You don’t need to ‘trigger’ a logger or do anything special to have that logger automatically used by core Drupal, other than tagging the service.

The service definition for the core DbLog logger looks like this (from dblog.services.yml):

services:
  logger.dblog:
    class: Drupal\dblog\Logger\DbLog
    arguments: ['@database', '@logger.log_message_parser']
    tags:
      - { name: logger }
      - { name: backend_overridable }

You can see that the logger.dblog service is implemented by the DbLog class, and this service is tagged as a logger. That tag is how Drupal knows to send log messages to this service. Without the tag, Drupal wouldn’t know this service was a destination for log messages.

Another good example in core is the core syslog module, which provides a logger that uses the PHP syslog() function to send messages to an operating-system-dependent location (probably a flat text file shared by all other programs that log messages on that operating system).

To make your own ‘Splunk’ logger, you would create a module that defines a logger service, for example logger.splunk. (Services are defined in the module’s <modulename>.services.yml file.) You must have a class, for example ‘Splunk’, that implements LoggerInterface. In that ‘Splunk’ class you may use the Splunk API to send log messages to Splunk. The details of how to do that are up to you, as they have nothing to do with Drupal at this point. If you have code that sends log messages to Splunk from a standalone program, then I think it is pretty clear from the examples provided by core how to use that code in your logger class.

Your module will consist of a <modulename>.info.yml file, a <modulename>.services.yml file, and a <modulename>/src/Logger/Splunk.php file. Nothing more is needed. If you’ve done it right, then when you enable your module, all messages should be logged to Splunk. A sketch of the two key files follows.
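
A minimal sketch of those two files, assuming a hypothetical module named splunk_logger; the actual Splunk delivery call is left as a stub, since it depends entirely on your Splunk setup:

# splunk_logger.services.yml
services:
  splunk_logger.logger.splunk:
    class: Drupal\splunk_logger\Logger\Splunk
    arguments: ['@logger.log_message_parser']
    tags:
      - { name: logger }

<?php
// src/Logger/Splunk.php
namespace Drupal\splunk_logger\Logger;

use Drupal\Core\Logger\LogMessageParserInterface;
use Drupal\Core\Logger\RfcLoggerTrait;
use Psr\Log\LoggerInterface;

class Splunk implements LoggerInterface {
  // Provides emergency(), alert(), etc., which all delegate to log().
  use RfcLoggerTrait;

  protected $parser;

  public function __construct(LogMessageParserInterface $parser) {
    $this->parser = $parser;
  }

  public function log($level, $message, array $context = []) {
    // Replace the Drupal-style placeholders in the message.
    $placeholders = $this->parser->parseMessagePlaceholders($message, $context);
    $rendered = strtr($message, $placeholders);
    // Hand $rendered (plus $level and the channel from $context) to the
    // Splunk API, e.g. via its HTTP Event Collector — stub.
  }
}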

data recovery – Formatted a partition in Windows 10, can’t get to the login screen in Linux Mint 19

Issue: I formatted, in Windows, an NTFS drive that was created in Linux. Now I can only log into Linux in emergency mode, and I cannot see my home folder, its subfolders, or the files in them. Ideally I would like to recover the installation, or at least recover the home folder. I can log into Windows and work normally.

I have a dual-boot T480 laptop with preinstalled Windows 10. Once I bought it, I resized the C: drive; using GParted I created additional partitions for storage (D:, 50 GB and F:, 100 GB) and other ext4 partitions, and installed Linux Mint 19. I’ve been using this setup for the last year or so. D: had not been used since its creation in GParted.

A few days ago, while I was logged into Windows 10, I formatted the unused D: drive. The first clue something was wrong was that the partition size was 175 GB instead of the 50 GB I had created. I created one empty folder from Windows after I formatted the partition. This is how my partitions look after that process: https://imgur.com/pGuTlot. I don’t see the Windows recovery partition, etc.

When I tried to log into Mint, I got the GRUB menu, selected Mint, and all I got was a “You are in emergency mode” message: https://imgur.com/7a1MpTs. I could log in as root, though. If I restart, I can still log into Windows 10 and work normally. I have some files in the home folder that I would like to recover, along with my browser shortcuts.

What I tried:

  • I have an installation USB and I can log into the machine using it.
  • I can log into Windows 10 and work normally.
  • I tried TestDisk. This is what I got at first: https://imgur.com/lMdwuGu.
  • The next step was a quick search. This is what I got after the quick search: https://imgur.com/EpGI1nB. Many partitions are marked with “D” (deleted).
  • I tried finding files using option “p”, but I don’t see the files I’m looking for (my home directory or subfolders like Desktop, etc.).

TestDisk output after analysis (screenshot): green check marks are Windows partitions that are OK, yellow entries are the Linux root, and red crosses carry “disk damaged” error messages.

Current state of partitions (screenshot).

RESOURCE_SEMAPHORE waits during full and transaction log backups done by Ola Hallengren’s solution

We have a SQL Server 2017 instance on version 14.0.3370, and there are issues with RESOURCE_SEMAPHORE waits when the full or transaction log backup jobs run. The instance hosts over 9,000 small databases, each with used space of up to approximately 10 MB, and the databases are not used much. Maximum memory for the instance is 28 GB. requested_memory_kb for the running dbo.DatabaseBackup stored procedure is around 4.5 GB, even when the transaction log backup runs only for the system databases.

EXECUTE dbo.DatabaseBackup
@Databases = 'SYSTEM_DATABASES',
@Directory = '',
@BackupType = 'LOG',
@Verify = 'Y',
@CheckSum = 'Y',
@LogToTable = 'Y',
@CleanupMode = 'BEFORE_BACKUP',
@ChangeBackupType = 'Y',
@CleanupTime = 156

I’ve used a cursor for backing up transaction logs and full backups. New databases are created daily, so using a Microsoft maintenance plan is not an option: I haven’t found any way for it to change the backup type when new databases are created, so the transaction log backup would fail due to the missing full backup of the new databases.

The same RESOURCE_SEMAPHORE wait problem shows up from time to time for the index maintenance and update statistics jobs as well, and I don’t want to replace Ola’s maintenance plans, because we use them as a default. (A DMV query for watching the memory grants is sketched below.)
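
While troubleshooting, one way to watch the memory grants as the backup job runs is the standard DMV below. This is just a generic sketch; it involves nothing specific to Ola Hallengren’s procedures:

SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,      -- NULL while the request is still waiting
       wait_time_ms,
       resource_semaphore_id
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;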
