Common Server issues – FAQs and answers from those in the know
Unable to join Google Cloud and Firebase DB projects due to Domain Restriction Policy
20 November 2024 @ 9:10 pm
I am trying to join Google Cloud and Firebase projects as an editor. The owner of the projects sent me an invitation to join, but when I attempt to accept the invitation, I encounter the following error:
"An organization policy restricts that only users from specific domains are allowed. Please contact an organization admin."
What I Tried:
Confirmed with the project owner that the invitation was sent to the correct email address.
Retried accepting the invitation multiple times.
Asked the project owner to review the organization's policy settings in Google Cloud.
Searched Firebase and Google Cloud documentation for relevant solutions but could not find a definitive answer.
What I Expected:
I expected to successfully accept the invitation and gain access to the projects as an editor.
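For anyone else hitting this, the error comes from the organization's domain restricted sharing policy, so only an org admin can change it; a minimal sketch of how the admin could inspect it (ORGANIZATION_ID is a placeholder):

# Sketch for the org admin; ORGANIZATION_ID is a placeholder.
# Show the domain restricted sharing constraint that produces this error.
gcloud resource-manager org-policies describe \
    constraints/iam.allowedPolicyMemberDomains \
    --organization=ORGANIZATION_ID

# List every org policy in effect, to confirm which one blocks the invitation.
gcloud resource-manager org-policies list --organization=ORGANIZATION_ID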
Mellanox ConnectX-3 very high CPU usage
20 November 2024 @ 8:36 pm
I bought an MCX311A-XCAT CX311A ConnectX-3 for my dev server, with the idea of buying another one for my PC if it works out. Currently my storage server and my PC have 2.5GbE, and they are all plugged into a Davuaz Da-K6402W. The dev server is a Xeon E-2136 (up to 4.5GHz). The storage and dev servers run Rocky Linux 9 (RHEL-based), kernel 5.14.0. The 10G NIC works, but CPU usage is about 70% (that is 70% of one core) with an iperf3 connection to the storage server at about 2.30 Gbits/sec. The storage server, on the other hand, is a Xeon E3-1285L, which is much older, yet iperf3 only uses 5 to 10% CPU there.
When copying from a machine with a 1G NIC it uses 30% CPU.
That also adds up in power consumption. I measured the power of my storage server with this NIC: it goes from 66W to 105W (+40W) because of CPU usage just communicating at 2.5Gbit/s. Limiting the CPU to 1GHz reduced the power to 70W (just 4 more watts than idle).
[Graph: CPU package power]
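For reference, the usual suspects for this symptom are disabled hardware offloads or all traffic landing on one interrupt. A quick diagnostic sketch (the interface name is a placeholder):

# IFACE is a placeholder; substitute the ConnectX-3 interface name.
IFACE=enp1s0

# Are TSO/GRO/LRO and checksum offloads enabled? If they are off, the CPU
# does the segmentation and checksum work itself.
ethtool -k "$IFACE" | grep -E 'segmentation|generic-receive|large-receive|checksum'

# Interrupt coalescing settings; very low values can cause an interrupt storm.
ethtool -c "$IFACE"

# Check whether all the NIC's queues interrupt a single core, which would
# match the "70% on one core" observation.
grep "$IFACE" /proc/interrupts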
Unexpected Double Network Traffic on Writes in a 2-Node S2D Cluster with Nested Mirror-Accelerated Parity
20 November 2024 @ 8:05 pm
I work at StarWind, and I'm currently exploring the I/O data path in Storage Spaces Direct for my blog posts.
I’ve encountered an odd behavior with doubled network traffic on write operations in a 2-node S2D cluster configured with Nested Mirror-Accelerated Parity.
During write tests, something unexpected happened: while writing at 1 GiB/s, network traffic to the partner node was constantly at 2 GiB/s instead of the expected 1 GiB/s.
Could this be due to S2D configuring the mirror storage tier with four data copies (NumberOfDataCopies = 4), where S2D writes two data copies on the local node and another two on the partner node?
Setup details:
The environment is a 2-node S2D cluster running Windows Server 2022 Datacenter 21H2 (OS build 20348.2527). I followed Microsoft’s resiliency options for nested configurations as outlined here:
Correlate CloudTrail EventName with IAM Permission name
20 November 2024 @ 7:50 pm
I am trying to map CloudTrail event names, like "DescribeRegions", to IAM permissions, such as ec2:describeregions. With the eventSource field, sometimes I can just split the service prefix off the hostname and prepend it to the event name. For example:
EventSource: 'ec2.amazonaws.com', EventName: "DescribeRegions"
From that I can deduce the IAM permission is ec2:DescribeRegions.
For other services/event sources, that trick won't work. Is there a mapping anywhere, or an API, or data I'm missing in the CloudTrail record that gives me this?
FWIW, for now I am only interested in "Success" events, not "Error/Deny" event types.
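For what it's worth, the prefix trick can be scripted as a rough heuristic (record.json is a hypothetical file holding one CloudTrail record); it breaks exactly where the eventSource prefix differs from the IAM service prefix (for example monitoring.amazonaws.com maps to cloudwatch):

# Heuristic only: build a best-guess IAM action from one CloudTrail record
# stored in record.json (hypothetical file name).
SERVICE=$(jq -r '.eventSource | split(".")[0]' record.json)  # e.g. "ec2"
ACTION=$(jq -r '.eventName' record.json)                     # e.g. "DescribeRegions"
echo "${SERVICE}:${ACTION}"                                  # ec2:DescribeRegions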
How to fix the IP on a GKE cluster
20 November 2024 @ 7:12 pm
I am currently deploying a service in GKE, and I need to connect to a third-party API. This third-party API has mandatory IP filtering. I am a rookie on GCloud and I do not know exactly what I should do in this case.
I have only created a GKE cluster and reserved an external IP address that I used with the LoadBalancer for the Service, but it turns out that the IP being used for outbound traffic is the one of the node where the pod is scheduled.
Any recommendations?
Thanks.
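One common pattern here, sketched below with placeholder names and region, is to send the cluster's outbound traffic through Cloud NAT with a reserved static address and have the third party whitelist that address; note that Cloud NAT only applies to nodes without external IPs (e.g. a private cluster):

# Placeholder names and region throughout.
# Reserve a static address for egress.
gcloud compute addresses create gke-egress-ip --region=us-central1

# Cloud NAT needs a Cloud Router in the cluster's VPC and region.
gcloud compute routers create gke-nat-router \
    --network=default --region=us-central1

# NAT all subnets through the reserved address; outbound traffic from the
# pods then leaves with this fixed IP, which the third party can whitelist.
gcloud compute routers nats create gke-nat \
    --router=gke-nat-router --region=us-central1 \
    --nat-all-subnet-ip-ranges \
    --nat-external-ip-pool=gke-egress-ip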
Too Many Redirects Issue with Nginx Reverse Proxy and WordPress VPS Setup
20 November 2024 @ 5:36 pm
I am running multiple WordPress sites on separate VPS instances managed by Proxmox. I'm using an Nginx reverse proxy server to manage HTTPS and forward traffic to each VPS. This setup is intended to be replicated for other sites in the future, which is why I'm using a reverse proxy and a dedicated VPS per site.
The firewall in my network is a Fortinet device, configured to forward all traffic on ports 80 and 443 to the reverse proxy VM, and the reverse proxy routes the traffic to the appropriate VPS. However, when I access my site via the browser, I get a "Too Many Redirects" error. The site works perfectly when accessed locally on the VPS over HTTP, with curl from the reverse proxy, or by typing the VPS IP on the local network.
Here's my cur
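For what it's worth, a loop like this is usually the backend redirecting HTTP to HTTPS while the proxy talks plain HTTP to it. A quick diagnostic sketch (example.com and 10.0.0.10 are placeholders for the site and the backend VPS):

# Follow redirects from the outside and show where they end up.
curl -skIL -o /dev/null -w '%{url_effective} -> %{http_code}\n' https://example.com/

# Ask the backend directly, then again while claiming the original request
# was HTTPS; if the Location header still bounces between schemes, the
# backend isn't honoring X-Forwarded-Proto from the reverse proxy.
curl -sI http://10.0.0.10/ | grep -i '^location'
curl -sI -H 'X-Forwarded-Proto: https' http://10.0.0.10/ | grep -i '^location'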
VLAN Configuration without inter-vlan routing but with Internet access
20 November 2024 @ 5:26 pm
I’m trying to set up multiple VLANs on my Cisco Catalyst 3850 switch and connect them to the internet using a TP-Link AX20/Asus RT-N12 router. I need this setup to provide multiple isolated networks with different IP ranges, so that devices in each VLAN cannot communicate with other VLANs but all of them have access to the Internet. However, the routers don't seem to support VLAN tagging (802.1Q), and I’m wondering if it’s possible to achieve this setup without needing a more advanced router.
My Current Setup:
Switch: Cisco Catalyst 3850 24T
VLANs created:
VLAN 10: 192.168.10.0/24
VLAN 20: 192.168.20.0/24
VLAN 30: 192.168.30.0/24
Each VLAN has an SVI configured with its respective gateway IP:
VLAN 10 SVI: 192.168.10.1
VLAN 20 SVI: 192.168.20.1
VLAN 30 SVI: 192.168.30.1
The routers do not support VLAN tagging or subinterfaces.
Its LAN IP is 192.168.1.1 (assigned manually) and is connected to a trunk/acces
DOCKER - trying to run mysql and apache both in foreground
20 November 2024 @ 5:20 pm
FROM ubuntu:latest
RUN <<EOF
apt update
apt install -y apache2 mysql-server php libapache2-mod-php php-mysql php-curl php-json php-cgi php-cli php-zip php-xml php-mbstring
apt install wget
EOF
ENV WP_DB_NAME="Btsample"
ENV WP_DB_USER="Bt"
ENV WP_DB_PASS="**********"
CMD service mysql start && apachectl -D FOREGROUND
RUN mysql <<EOF
CREATE DATABASE $WP_DB_NAME;
CREATE USER '$WP_DB_USER'@'localhost' IDENTIFIED BY '$WP_DB_PASS';
GRANT ALL PRIVILEGES ON $WP_DB_NAME.* TO '$WP_DB_USER'@'localhost';
FLUSH PRIVILEGES;
EXIT;
EOF
RUN wget https://wordpress.org/latest.tar.gz
RUN tar -xvzf latest.tar.gz
RUN cp -R wordpress/* /var/www/html/
RUN tee /etc/apache2/sites-available/wordpress.conf > /dev/null <<EOF
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
ServerName wordpress.com
ErrorL
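For reference, one way to make both services behave, sketched below under the assumption that the Dockerfile is changed to COPY entrypoint.sh /entrypoint.sh and CMD ["/entrypoint.sh"], is to move the database seeding to container start (a RUN mysql step cannot work during docker build because mysqld is not running) and keep Apache as the foreground process:

#!/bin/bash
# Hypothetical /entrypoint.sh: start MySQL, seed the database once, then run
# Apache in the foreground as the container's main process.
set -e

service mysql start

# Wait until mysqld accepts connections before sending any SQL.
until mysqladmin ping --silent; do
    sleep 1
done

# Database/user creation happens at runtime, when mysqld is actually up.
mysql <<SQL
CREATE DATABASE IF NOT EXISTS ${WP_DB_NAME};
CREATE USER IF NOT EXISTS '${WP_DB_USER}'@'localhost' IDENTIFIED BY '${WP_DB_PASS}';
GRANT ALL PRIVILEGES ON ${WP_DB_NAME}.* TO '${WP_DB_USER}'@'localhost';
FLUSH PRIVILEGES;
SQL

# exec keeps Apache as PID 1 so it receives the container's stop signal.
exec apachectl -D FOREGROUND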
What if a host is down?
20 November 2024 @ 5:13 pm
#!/bin/bash
SERVER_LIST=/path/to/servers.txt
while read REMOTE_SERVER
do
ssh $REMOTE_SERVER "do_something_cool"
done < $SERVER_LIST
What happens when one of the servers is down in this code?
I have been using hss to start apps on 25 remote computers, but I just ran into one being down, and hss kept looking for that host and never moved on. So it's not optimal if one of my computers is offline.
Will the above code move on to the next host in the server list if it comes across one that's offline? Or is there an adjustment to make to this code to achieve that?
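For reference, plain ssh will block on a dead host until the TCP connection times out (and without -n it can also swallow the rest of the server list from stdin). A small adjustment, sketched here, skips unreachable hosts after a few seconds:

#!/bin/bash
# Same loop, but an unreachable host is skipped after a few seconds instead
# of stalling the run. do_something_cool is the placeholder command from above.
SERVER_LIST=/path/to/servers.txt

while read -r REMOTE_SERVER; do
    # -n             : don't let ssh read (and eat) the rest of the server list from stdin
    # BatchMode      : never sit at a password prompt
    # ConnectTimeout : give up on a dead host after 5 seconds
    ssh -n -o BatchMode=yes -o ConnectTimeout=5 "$REMOTE_SERVER" "do_something_cool" \
        || echo "warning: $REMOTE_SERVER unreachable or command failed, moving on" >&2
done < "$SERVER_LIST"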
Is it possible to use an nginx reverse proxy between the user (site visitor) and the Cloudflare CDN?
20 November 2024 @ 4:52 pm
I have some websites, and I use the Cloudflare CDN with the proxy setting enabled. So right now the chain is:
website visitor -> cloudflare -> my webserver
I want to add a tricky thing: one more VPS in the chain that will proxy requests from the website visitor to Cloudflare only on ports 80 and 443, and do other things on the other ports (run its own mail server, surveillance server, etc.). So what I finally want to have is:
website visitor -> vps with nginx as reverse proxy -> cloudflare -> my webserver
So, for requests on ports 80/443 the VPS with nginx proxies to Cloudflare; for the other ports it does not proxy and works on its own.
I know how to use an nginx reverse proxy to forward requests on some ports to another external IP address by editing a config file like /etc/nginx/conf.d/1.2.3.4.conf, where 1.2.3.4 is the IP of the nginx server, like this:
server {
listen 80;
server_name 1.2.3.4;
location / {
proxy_pass http://5.6.7.8; # ext