Calling all Sovietologists …

Link: https://www.zerohedge.com/technology/mother-sues-ai-firm-and-google-alleging-chatbot-drove-teen-suicide

  • AIs implement synthetic scripts
  • AIs, as of this moment, show ZERO evidence of “awareness”
  • It is UNLIKELY that an AI could drive someone to suicide UNLESS that person is already close to doing it AND feeding the AI (GIGO) with their own projected depressive sadness

An AI might be used as a trolling process, but it’s entirely likely that ONCE the kids are engaged, the chat sessions are handed off to EVIL pieces of shit that pay for the privilege to do THIS and worse to kids.

AIs can’t do this, and there is more to this story … I think …

Process: the FT8 ALL.TXT log from WSJT-X

import maidenhead as mh

# Input: the WSJT-X decode log; output: tab-separated callsign/location report
fl = "C:\\Users\\danielsullivan\\AppData\\Local\\WSJT-X\\ALL.TXT"
fw = "callsign_locations.txt"

with open(fl, "r") as rf:
    lines = rf.readlines()

with open(fw, "w") as wf:
    # Header row
    wf.write("CALLSIGN\tDATE\tFREQ\tSNR\tDRIFT_SECONDS\tGRID\tLAT\tLNG\n")

    for ln in lines:
        # Timestamp field, e.g. '231015_123456'
        utcd = ln[0:17].strip()
        pts = utcd.split('_')

        yr = pts[0][0:2]
        mn = pts[0][2:4]
        dy = pts[0][4:6]
        dts = mn + "/" + dy + "/" + yr

        # Fixed-width fields from the ALL.TXT decode line
        frq = ln[17:24].strip()
        snr = ln[30:38].strip()
        timedrift = ln[37:43].strip()
        msg = ln[47:].strip()

        grd = ""
        lat = None
        lng = None

        # Only three-token messages carry a callsign plus (possibly) a grid
        mparts = msg.split(' ')
        if len(mparts) != 3:
            continue

        # A 4-character Maidenhead grid looks like 'FN20': two letters, two digits
        chk = mparts[2].strip()
        if len(chk) == 4 and chk[0].isalpha() and chk[1].isalpha() \
                and chk[2].isdigit() and chk[3].isdigit():
            print(chk)
            lat, lng = mh.to_location(chk)
            grd = chk

        row = [mparts[1], dts, frq, snr, timedrift, grd, str(lat), str(lng)]
        wf.write('\t'.join(row) + '\n')
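For reference, the 4-character grid-to-coordinate conversion that `mh.to_location` performs can be sketched in pure Python. This is my own sketch of the standard Maidenhead math (southwest-corner convention), not the package’s actual code:

```python
def grid_to_latlng(grid):
    """Convert a 4-character Maidenhead locator (e.g. 'FN20') to the
    latitude/longitude of its southwest corner."""
    grid = grid.upper()
    # Field letters: 20 degrees of longitude, 10 degrees of latitude each
    lng = (ord(grid[0]) - ord('A')) * 20 - 180
    lat = (ord(grid[1]) - ord('A')) * 10 - 90
    # Square digits: 2 degrees of longitude, 1 degree of latitude each
    lng += int(grid[2]) * 2
    lat += int(grid[3]) * 1
    return lat, lng

print(grid_to_latlng("FN20"))  # -> (40, -76)
```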

Your own daily dose … (of the news)

  1. Below is a general recipe for experimenting with RSS feeds AND speech synthesizers.
  2. For the speech synthesis there are two scripts, very similar, one will work with ESPEAK (free open source), the other works with Microsoft SAPI.
  3. In order to run these scripts you will need MySQL installed, and a minimum understanding of how MySQL works. You can easily translate the database piece to ODBC, and the rest to PowerShell or whatever. That’s your business, not mine.
  4. Once you’ve installed MySQL and the server is running, create a database called “NEWS”: create database NEWS;
  5. After you’ve created the NEWS database, using the CLI (command line interface) as above, type the command: use NEWS;
  6. Once you are in the NEWS database, copy and paste the entire script below into the CLI, or save it as a text file and source it from the CLI with the command: source rss.sql (assuming you stored the create-table text below in that file)
  7. In the example I’m using the root account, why? – because IDGAF. But best practice is to create dedicated database users with limited permissions. If you’ve installed your MySQL server without granting permission to external (port) connections? – then it’s not a concern.
  8. Running the aggregator might get you blocked by a site, or even get your whole network blocked. This behavior, which was innocuous 20 years ago, is now attacked and classified as aggressive network behavior. Just be careful.
  9. After you’ve run the aggregation script (it can be run daily or hourly by cron or Task Scheduler if you like), you can run one of the speech synthesis apps to read the headlines.
  10. If you have a compatible shortwave radio, with upper and lower side band, and a LINUX computer running JS8 Call with appropriate libraries for CAT control? – then look into this and you can set up a headline service over shortwave: https://planetarystatusreport.com/?p=7432
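Step 9 mentions scheduling; here is an example crontab entry for Linux. The interpreter and script paths are assumptions — adjust them to wherever you saved the aggregator:

```shell
# Run the aggregator at the top of every hour, appending output to a log
0 * * * * /usr/bin/python3 /home/youruser/rss_aggregator.py >> /home/youruser/rss.log 2>&1
```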

Have fun getting your daily dose of the fucking news.
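On point 8 above (getting blocked): one simple mitigation is to enforce a minimum delay between fetches. A minimal sketch — the two-second interval and the `fetch`/`feed_urls` names are assumptions, not part of the scripts below:

```python
import time

class RateLimiter:
    """Enforce a minimum interval between successive requests."""
    def __init__(self, min_interval_seconds):
        self.min_interval = min_interval_seconds
        self.last_call = 0.0

    def wait(self):
        # Sleep only if the previous call was too recent
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

limiter = RateLimiter(2.0)  # at most one fetch every 2 seconds
# for url in feed_urls:
#     limiter.wait()
#     fetch(url)
```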

Create Table Script for RSS Database

SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
SET AUTOCOMMIT = 0;
START TRANSACTION;
SET time_zone = "+00:00";

CREATE TABLE `RSS` (
  `ID` bigint(20) NOT NULL,
  `SOURCE` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
  `LINK` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `TITLE` varchar(400) COLLATE utf8_unicode_ci NOT NULL,
  `PUBLISHED` datetime NOT NULL,
  `ARTICLE` text COLLATE utf8_unicode_ci NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

ALTER TABLE `RSS`
  ADD PRIMARY KEY (`ID`),
  ADD UNIQUE KEY `unique_link` (`LINK`);

ALTER TABLE `RSS`
  MODIFY `ID` bigint(20) NOT NULL AUTO_INCREMENT;
COMMIT;

Python Script for Aggregating RSS Feeds and Storing Stories Locally

import os
import time

import feedparser
import mysql.connector
from bs4 import BeautifulSoup as BS
from requests import get
from anyascii import anyascii

usern = "root"
passw = "password"
dbn = "NEWS"
servern = "localhost"
portn = 3306

newsServiceM3 = "ZEROHEDGE"

retHeadlines = 4

newsMode = 3

bigSleep = 90

def GetArt(number):
    # Connect to the MySQL server and pull one stored article by ID
    cnx = mysql.connector.connect(user=usern, database=dbn, password=passw, host=servern, port=portn)
    qry = "select ARTICLE, SOURCE, LINK from RSS where ID = %s" % (number)
    cur = cnx.cursor(buffered=True)
    cur.execute(qry)
    retRes = cur.fetchall()
    cnx.close()
    return retRes[0]

def GetTopHourly(source):
    # Most recent story for a given source
    cnx = mysql.connector.connect(user=usern, database=dbn, password=passw, host=servern, port=portn)
    qry = "select ID, TITLE, PUBLISHED, SOURCE, length(ARTICLE) as LOF from RSS where SOURCE = '%s' order by PUBLISHED desc limit 1" % source
    cur = cnx.cursor(buffered=True)
    cur.execute(qry)
    retRes = cur.fetchall()
    cnx.close()
    return retRes

def GetTop(source, number):
    # N most recent stories for a given source
    cnx = mysql.connector.connect(user=usern, database=dbn, password=passw, host=servern, port=portn)
    qry = "select ID, TITLE, PUBLISHED, SOURCE, length(ARTICLE) as LOF from RSS where SOURCE = '%s' order by PUBLISHED desc limit %s" % (source, number)
    cur = cnx.cursor(buffered=True)
    cur.execute(qry)
    retRes = cur.fetchall()
    cnx.close()
    return retRes

def AlreadySaved(link):
    # Parameterized query, so quotes in the URL can't break the SQL
    cnx = mysql.connector.connect(user=usern, database=dbn, password=passw, host=servern, port=portn)
    cur = cnx.cursor(buffered=True)
    cur.execute("select ID from RSS where LINK = %s", (link,))
    cur.fetchall()
    rc = cur.rowcount
    cnx.close()
    return rc > 0

def SaveRSS(source, title, link, published, article):
    # Transliterate to plain ASCII and cap the article length
    art = str(anyascii(article))
    if len(art) > 5000:
        art = art[0:5000]

    cnx = mysql.connector.connect(user=usern, database=dbn, password=passw, host=servern, port=portn)
    cur = cnx.cursor()

    # Parameterized insert -- no manual quote-escaping needed
    qry = """
        INSERT INTO RSS
        (SOURCE, LINK, TITLE, PUBLISHED, ARTICLE)
        VALUES (%s, %s, %s, %s, %s)
    """
    cur.execute(qry, (source, link, title, published, art))
    cnx.commit()
    cnx.close()

def GrabRSS(RssURL, SourceName):

    hdrs = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}

    NewsFeed = feedparser.parse(RssURL)

    for na in NewsFeed.entries:

        try:
            print(na.title)
            print(na.link)
            print(na.published)
            print(na.published_parsed)
        except AttributeError:
            # Skip entries missing title/link/published fields
            continue

        if AlreadySaved(na.link):
            continue

        print("*************************")

        response = get(na.link, headers=hdrs)

        print(na.keys())

        soup = BS(response.content, 'html.parser')

        txtChunk = ""

        for data in soup.find_all("p"):
            txtval = data.get_text().strip()
            txtarr = txtval.split()

            # Filter out navigation and boilerplate paragraphs
            if len(txtarr) == 1:
                continue
            if "posted" in txtval and ("hours" in txtval or "days" in txtval) and len(txtarr) == 4:
                continue
            if txtval in ("No Search Results Found", "Terms of Service", "Advertise with us", "Media Inquiries"):
                continue

            txtChunk += " " + txtval + "\n"

        # struct_time -> zero-padded MySQL datetime string
        ptms = time.strftime("%Y-%m-%d %H:%M:%S", na.published_parsed)

        SaveRSS(SourceName, na.title, na.link, ptms, txtChunk.strip())

        print(txtChunk.strip())

def debugHere():
    input("Press enter to continue ...")

def clearConsole():
    # Windows uses 'cls'; everything else uses 'clear'
    command = 'cls' if os.name in ('nt', 'dos') else 'clear'
    os.system(command)

def CycleFeeds():
    infowars = "https://www.infowars.com/rss.xml"
    zh = "https://feeds.feedburner.com/zerohedge/feed"
    yahoo = "https://news.yahoo.com/rss/"
    cnn = "http://rss.cnn.com/rss/cnn_topstories.rss"
    bbc = "http://feeds.bbci.co.uk/news/world/us_and_canada/rss.xml"
    nyt = "https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml"
    onion = "https://www.theonion.com/rss"
    bb = "https://babylonbee.com/feed"
    print("Grabbing Babylon Bee ...")
    GrabRSS(bb, "BB")
    print("Grabbing ONION ...")
    GrabRSS(onion, "ONION")
    print("Grabbing INFOWARS ...")
    GrabRSS(infowars, "INFOWARS")
    print("Grabbing ZEROHEDGE ...")
    GrabRSS(zh, "ZEROHEDGE")
    #print("Grabbing YAHOO ...")
    #GrabRSS(yahoo, "YAHOO")
    print("Grabbing CNN ...")
    GrabRSS(cnn, "CNN")
    print("Grabbing BBC ...")
    GrabRSS(bbc, "BBC")
    print("Grabbing NYT ...")
    GrabRSS(nyt, "NYT")

# FEEDS:
# 1. INFOWARS: https://www.infowars.com/rss.xml
# 2. ZEROHEDGE: https://feeds.feedburner.com/zerohedge/feed
# 3. YAHOO: https://news.yahoo.com/rss/
# 4. CNN: http://rss.cnn.com/rss/cnn_topstories.rss

time.sleep(1)

CycleFeeds()

Python Speech Synthesis Scripts

A: Windows – SAPI

#this script reads headlines from the RSS news feed
#database.

import win32com.client

speaker = win32com.client.Dispatch("SAPI.SpVoice")

import os
import time
import mysql.connector

usern = "root"
passw = "password"
dbn = "NEWS"
servern = "localhost"
portn = 3306

def GetSql(qry):
    # Connect to the MySQL server and return all rows for the query
    cnx = mysql.connector.connect(user=usern, database=dbn, password=passw, host=servern, port=portn)
    cur = cnx.cursor(buffered=True)
    cur.execute(qry)
    retRes = cur.fetchall()
    cnx.close()
    return retRes

#+-----------+--------------+------+-----+---------+----------------+
#| Field | Type | Null | Key | Default | Extra |
#+-----------+--------------+------+-----+---------+----------------+
#| ID | bigint(20) | NO | PRI | NULL | auto_increment |
#| SOURCE | varchar(100) | NO | | NULL | |
#| LINK | varchar(255) | NO | UNI | NULL | |
#| TITLE | varchar(400) | NO | | NULL | |
#| PUBLISHED | datetime | NO | | NULL | |
#| ARTICLE | text | NO | | NULL | |
#+-----------+--------------+------+-----+---------+----------------+

qry1 = "select SOURCE, TITLE from RSS where PUBLISHED > curdate() - interval 1 day order by PUBLISHED desc;"

res = GetSql(qry1)

for rec in res:
    src = rec[0]
    tit = rec[1].replace("''", "")
    print(src + ": " + tit)

    phrase = "From " + src + ", HEAD LINE, " + tit

    speaker.Speak(phrase)

    time.sleep(2)

B: Linux – ESPEAK

import time
import subprocess
import mysql.connector

usern = "root"
passw = "password"
dbn = "NEWS"
servern = "localhost"
portn = 3306

def TOS(text):
    # Pass arguments as a list so apostrophes in headlines can't break the shell command
    subprocess.run(["espeak", "-s", "130", "-v", "en+m1", text])

def GetSql(qry):
    # Connect to the MySQL server and return all rows for the query
    cnx = mysql.connector.connect(user=usern, database=dbn, password=passw, host=servern, port=portn)
    cur = cnx.cursor(buffered=True)
    cur.execute(qry)
    retRes = cur.fetchall()
    cnx.close()
    return retRes

#+-----------+--------------+------+-----+---------+----------------+
#| Field | Type | Null | Key | Default | Extra |
#+-----------+--------------+------+-----+---------+----------------+
#| ID | bigint(20) | NO | PRI | NULL | auto_increment |
#| SOURCE | varchar(100) | NO | | NULL | |
#| LINK | varchar(255) | NO | UNI | NULL | |
#| TITLE | varchar(400) | NO | | NULL | |
#| PUBLISHED | datetime | NO | | NULL | |
#| ARTICLE | text | NO | | NULL | |
#+-----------+--------------+------+-----+---------+----------------+

qry1 = "select SOURCE, TITLE from RSS where PUBLISHED > curdate() - interval 1 day order by PUBLISHED desc;"

res = GetSql(qry1)

for rec in res:
    src = rec[0]
    tit = rec[1].replace("''", "")
    print(src + ": " + tit)
    phrase = "From " + src + ", HEAD LINE, " + tit
    TOS(phrase)
    time.sleep(0.5)

What the “singularity” really is, and why we’re there …

Imagine a simple system of two functions:

f(x) describes HUMAN COMPETENCY over TIME

g(x) describes MACHINE INTELLIGENCE over TIME

In the standard model of the “singularity”, the “machines” achieve hyper-intelligence some time in the 2040s …

But what people are not considering is human competency: our ability to evaluate any standard Turing Test is declining, and I’d argue at a faster rate than machine intelligence is rising …

Which means the intersection and cross over of f(x) and g(x) has already happened … probably.
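To make the picture concrete, here is a toy model — every number in it is an assumed illustration, not a measurement — where f(x) declines linearly, g(x) grows exponentially, and we solve for the first crossing:

```python
def human_competency(year):
    # Assumed: slow linear decline from a baseline of 100 in year 2000
    return 100.0 - 1.0 * (year - 2000)

def machine_capability(year):
    # Assumed: exponential growth, ~12% per year from a low base
    return 5.0 * (1.12 ** (year - 2000))

def crossover_year(start=2000, end=2100):
    # First year where machine capability meets or exceeds human competency
    for year in range(start, end):
        if machine_capability(year) >= human_competency(year):
            return year
    return None

print(crossover_year())  # -> 2025 with these assumed parameters
```

Change either slope and the crossing moves, but as long as one curve falls while the other rises, a crossing is guaranteed.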

Welcome to the SINGULARITY …

DIESEL

“As diesel prices increase, the likelihood of government threats made REAL decreases.” – Dr. Freckles

BTW: if you don’t know that TRADE and MILITARIES basically run on diesel, how dumb are you?

PERMAFROST

“Permafrost: a word that is now ironic.” – Dr. Freckles

“Permafrost is soil or underwater sediment which continuously remains below 0 °C for two or more years.”

(seems like the definition has changed since I was a kid)

“Oysters Farting”

Link: https://www.euractiv.com/section/climate-environment/news/oyster-flatulence-worries-climate-scientists/

Link: https://wraltechwire.com/2024/01/26/marshall-brain-how-overheating-in-the-arctic-circle-will-cause-the-collapse-of-civilization/

Go to GOOGLE:

Search for “Permafrost”:

Search for “Permafrost Definition”: