If you can update your connector to a version that supports the new authentication plugin of MySQL 8, do that. If that is not an option for some reason, change the default authentication method of your database user to the native one.
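For example, a hedged sketch (user, host, and password are placeholders):
ALTER USER 'dbuser'@'localhost' IDENTIFIED WITH mysql_native_password BY 'dbpass';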
In my case it was user error: I was using the root user with an invalid password. I am not sure why I didn't get an auth error but instead received this cryptic message.
I added the following in my Application class:
@Bean
@ConfigurationProperties("app.datasource")
public DataSource dataSource() {
return DataSourceBuilder.create().build();
}
In application.properties I added:
app.datasource.url=jdbc:mysql://localhost/test
app.datasource.username=dbuser
app.datasource.password=dbpass
app.datasource.pool-size=30
More details: Configure a Custom DataSource
Use the INTERVAL type. E.g.:
--yesterday
SELECT NOW() - INTERVAL '1 DAY';
--Unrelated to the question, but PostgreSQL also supports some shortcuts:
SELECT 'yesterday'::TIMESTAMP, 'tomorrow'::TIMESTAMP, 'allballs'::TIME;
Then you can do the following in your query:
SELECT
org_id,
count(accounts) AS COUNT,
((date_at) - INTERVAL '1 DAY') AS dateat
FROM
sourcetable
WHERE
date_at <= now() - INTERVAL '130 DAYS'
GROUP BY
org_id,
dateat;
You can chain multiple operands. E.g.: how do you get the last day of the current month?
SELECT date_trunc('MONTH', CURRENT_DATE) + INTERVAL '1 MONTH - 1 DAY';
You can also create an interval using the make_interval function, which is useful when you need to create it at runtime (not using literals):
SELECT make_interval(days => 10 + 2);
SELECT make_interval(days => 1, hours => 2);
SELECT make_interval(0, 1, 0, 5, 0, 0, 0.0);
If you need it in a different layer:
Create a static class and expose all config properties on that layer, as below:
using Microsoft.Extensions.Configuration;
using System.IO;

namespace Core.DAL
{
    public static class ConfigSettings
    {
        public static string conStr1 { get; }

        static ConfigSettings()
        {
            var configurationBuilder = new ConfigurationBuilder();
            string path = Path.Combine(Directory.GetCurrentDirectory(), "appsettings.json");
            configurationBuilder.AddJsonFile(path, false);
            conStr1 = configurationBuilder.Build().GetSection("ConnectionStrings:ConStr1").Value;
        }
    }
}
Yet another reason may be that your entity class has several properties whose names match /.*id/i (i.e. ending with "ID", case-insensitive) and have an elementary type, while there is no [Key] attribute.
EF will try to figure out the PK by itself by looking for elementary-typed properties ending in "ID".
See my case:
public class MyTest : IMustHaveTenant
{
public long Id { get; set; }
public int TenantId { get; set; }
[MaxLength(32)]
public virtual string Signum{ get; set; }
public virtual string ID { get; set; }
public virtual string ID_Other { get; set; }
}
Don't ask - legacy code. The Id was even inherited, so I could not use [Key] (I'm just simplifying the code here).
But here EF is totally confused.
What helped was configuring this via the model builder in the DbContext class:
modelBuilder.Entity<MyTest>(f =>
{
f.HasKey(e => e.Id);
f.HasIndex(e => new { e.TenantId });
f.HasIndex(e => new { e.TenantId, e.ID_Other });
});
The index on the PK is implicit.
Deleting the contents of the folder C:\Users\User_Name\AppData\Roaming\pgAdmin\sessions helped me; I was able to start and load the pgAdmin server.
spring.jpa.hibernate.ddl-auto = update
Change update to create and run the application. After it has run safely, change create back to update and run again, so that the tables will not be created anew and you can use your previous data.
As mentioned in the top-scoring answer by Win, you may need to install the Microsoft.EntityFrameworkCore.SqlServer NuGet package, but please note that this question uses ASP.NET Core MVC. In ASP.NET Core 2.1, MS have included a so-called metapackage named Microsoft.AspNetCore.App.
https://docs.microsoft.com/en-us/aspnet/core/fundamentals/metapackage-app?view=aspnetcore-2.2
You can see the reference to it if you right-click the ASP.NET Core MVC project in the solution explorer and select Edit Project File.
For ASP.NET Core web apps you should see this metapackage reference:
<PackageReference Include="Microsoft.AspNetCore.App" />
Microsoft.EntityFrameworkCore.SqlServer is included in this metapackage. So in your Startup.cs you may only need to add:
using Microsoft.EntityFrameworkCore;
The same thing happened to me as I had changed something in the /etc/hosts
file. After changing it back to 127.0.0.1 localhost
it worked for me.
Strangely enough, the solution ended up being to change
volumes:
- ./postgres-data:/var/lib/postgresql
to
volumes:
- ./postgres-data:/var/lib/postgresql/data
I had problems trying to inject from my Program.cs file using CreateDefaultBuilder as below, but ended up solving it by skipping the default builder (see below).
var host = Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.ConfigureServices(servicesCollection => { servicesCollection.AddSingleton<ITest>(x => new Test()); });
webBuilder.UseStartup<Startup>();
}).Build();
It seems like the Build should have been done inside of ConfigureWebHostDefaults to get it to work, since otherwise the configuration will be skipped; but correct me if I am wrong.
This approach worked fine:
var host = new WebHostBuilder()
.ConfigureServices(servicesCollection =>
{
var serviceProvider = servicesCollection.BuildServiceProvider();
IConfiguration configuration = (IConfiguration)serviceProvider.GetService(typeof(IConfiguration));
servicesCollection.AddSingleton<ISendEmailHandler>(new SendEmailHandler(configuration));
})
.UseStartup<Startup>()
.Build();
This also shows how to inject an already predefined .NET Core dependency (IConfiguration) into your own service.
This should work:
select * from mytable where 'Journal'=ANY(pub_types);
i.e. the syntax is <value> = ANY (<array>). Also notice that string literals in PostgreSQL are written with single quotes.
I was getting the same error. I was using IntelliJ IDEA and wanted to run a Spring Boot application, so the solution on my side was as follows:
Go to the Run menu -> Run configuration -> click the Add button in the left panel and select Maven -> in parameters add this text: spring-boot:run
Now press Ok and Run.
Remember to check firewall settings as well. After checking and double-checking my pg_hba.conf and postgres.conf files, I finally found out that my firewall was overriding everything and therefore blocking connections.
Override the constructor of DbContext. Try this:
public DataContext(DbContextOptions<DataContext> option):base(option) {}
You can of course format the result of current_timestamp().
Please have a look at the various formatting functions in the official documentation.
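For example, a minimal sketch with to_char (the format string is an arbitrary choice):
SELECT to_char(current_timestamp, 'YYYY-MM-DD HH24:MI:SS');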
I strongly recommend this method for extracting the column 'Pollutants':
df_pollutants = pd.DataFrame(df['Pollutants'].values.tolist(), index=df.index)
It's much faster than
df_pollutants = df['Pollutants'].apply(pd.Series)
when the size of df is giant.
I tried to connect from localhost (Mac) to a Postgres container. I had changed the port in the docker-compose file from 5432 to 3306 and started the container. No idea why I did it :|
Then I tried to connect to postgres via PSequel and adminer and the connection could not be established.
After switching back to port 5432 all works fine.
db:
image: postgres
ports:
- 5432:5432
restart: always
volumes:
- "db_sql:/var/lib/mysql"
environment:
POSTGRES_USER: root
POSTGRES_PASSWORD: password
POSTGRES_DB: postgres_db
This was my experience, which I wanted to share; perhaps someone can make use of it.
I was getting this error even though all the relevant dependencies were in place, because I hadn't created the schema in MySQL.
I thought it would be created automatically, but it wasn't: although the tables themselves will be created, you have to create the schema yourself.
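For example, a minimal sketch (the schema name is hypothetical):
CREATE SCHEMA mydb; -- in MySQL, CREATE SCHEMA is a synonym for CREATE DATABASE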
run postgres -D /usr/local/var/postgres
and you should see something like:
FATAL: lock file "postmaster.pid" already exists
HINT: Is another postmaster (PID 379) running in data directory "/usr/local/var/postgres"?
Then run kill -9 <PID>, using the PID from the HINT, and you should be good to go.
In my case, the dump file I restored had these commands:
CREATE SCHEMA employees;
SET search_path = employees, pg_catalog;
I commented those out and restored again; the issue was resolved.
pg_ctl is a command-line (Windows) program, not a SQL statement. You need to run it from cmd.exe. Or use net start postgresql-9.5.
If you have installed Postgres through the installer, you should start the Windows service instead of running pg_ctl
manually, e.g. using:
net start postgresql-9.5
Note that the name of the service might be different in your installation. Another option is to start the service through the Windows control panel
I have used the pgAdmin II tool to create a database called company
Which means that Postgres is already running, so I don't understand why you think you need to do that again. Especially because the installer typically sets the service to start automatically when Windows is started.
The reason you are not seeing any result is that psql requires every SQL command to be terminated with ;. In your case it's simply waiting for you to finish the statement.
See here for more details: In psql, why do some commands have no effect?
Postgres hasn't implemented an equivalent to INSERT OR REPLACE
. From the ON CONFLICT
docs (emphasis mine):
It can be either DO NOTHING, or a DO UPDATE clause specifying the exact details of the UPDATE action to be performed in case of a conflict.
Though it doesn't give you shorthand for replacement, ON CONFLICT DO UPDATE
applies more generally, since it lets you set new values based on preexisting data. For example:
INSERT INTO users (id, level)
VALUES (1, 0)
ON CONFLICT (id) DO UPDATE
SET level = users.level + 1;
I got the same error and changing the following
SessionFactory sessionFactory =
new Configuration().configure().buildSessionFactory();
to this
SessionFactory sessionFactory =
new Configuration().configure("hibernate.cfg.xml").buildSessionFactory();
worked for me.
Use @Profile("whatever-name-profile-to-activate-if-needed") on your security configuration class that extends WebSecurityConfigurerAdapter.
security.ignored=/**
security.basic.enabled=false
NB: I still need to debug to find out why excluding the auto-configuration did not work for me. But the profile approach is not so bad, as you can still re-activate the configuration via configuration properties if needed.
Try
select date_part('year', your_column) from your_table;
or
select extract(year from your_column) from your_table;
First of all, you should not mix the Fluent API with data annotations, so I would suggest you use one of the options below.
Make sure you have correctly set the keys:
modelBuilder.Entity<Foo>()
.HasKey(p => new { p.Name, p.Id });
modelBuilder.Entity<Foo>().Property(p => p.Id).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
OR you can achieve it using data annotation as well
public class Foo
{
[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
[Key, Column(Order = 0)]
public int Id { get; set; }
[Key, Column(Order = 1)]
public string Name{ get; set; }
}
Open the file .bash_profile in your home folder (it is a hidden file).
Add the path below to the end of the export PATH line in your .bash_profile file:
:/Applications/Postgres.app/Contents/Versions/latest/bin
The : symbol separates the paths.
If the file contains:
export PATH=/usr/bin:/bin:/usr/sbin:/sbin
it will become:
export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/Applications/Postgres.app/Contents/Versions/latest/bin
In Terminal, paste the following: defaults write com.apple.finder AppleShowAllFiles YES
For all docker users, just run docker-php-ext-install mysqli
from inside your php container.
Update: More information on https://hub.docker.com/_/php in the section "How to install more PHP extensions".
What I did was to first check the running processes with:
SELECT * FROM pg_stat_activity WHERE state = 'active';
Find the process you want to kill, then type:
SELECT pg_cancel_backend(<pid of the process>)
This basically "starts" a request to terminate gracefully, which may be satisfied after some time, though the query comes back immediately.
If the process cannot be killed, try:
SELECT pg_terminate_backend(<pid of the process>)
CREATE ROLE blog WITH
LOGIN
SUPERUSER
INHERIT
CREATEDB
CREATEROLE
REPLICATION;
COMMENT ON ROLE blog IS 'Test';
On Windows:
1. Open the Run window with WinKey + R.
2. Type services.msc.
3. Search for the Postgres service based on the version installed.
4. Click the stop, start or restart option for the service.
On Linux:
sudo systemctl restart postgresql
Instead of "restart" you can also use status, start or stop.
(Neither IN nor ANY is an "operator"; each is a "construct" or "syntax element".)
Logically, quoting the manual:
IN is equivalent to = ANY.
But there are two syntax variants of IN
and two variants of ANY
. Details:
IN
taking a set is equivalent to = ANY
taking a set, as demonstrated here:
But the second variant of each is not equivalent to the other. The second variant of the ANY
construct takes an array (must be an actual array type), while the second variant of IN
takes a comma-separated list of values. This leads to different restrictions in passing values and can also lead to different query plans in special cases:
ANY is more versatile
The ANY construct is far more versatile, as it can be combined with various operators, not just =. Example:
SELECT 'foo' LIKE ANY('{FOO,bar,%oo%}');
For a big number of values, providing a set scales better for each:
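For example, a minimal sketch providing the values as a set via a VALUES expression (table and values are hypothetical):
SELECT t.*
FROM (VALUES (1), (2), (3)) AS v(id)
JOIN tbl t USING (id);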
Related:
"Find rows where id
is in the given array":
SELECT * FROM tbl WHERE id = ANY (ARRAY[1, 2]);
Inversion: "Find rows where id
is not in the array":
SELECT * FROM tbl WHERE id <> ALL (ARRAY[1, 2]);
SELECT * FROM tbl WHERE id <> ALL ('{1, 2}'); -- equivalent array literal
SELECT * FROM tbl WHERE NOT (id = ANY ('{1, 2}'));
All three are equivalent. The first uses an array constructor, the other two an array literal. The data type can be derived from the context unambiguously; otherwise an explicit cast may be required, like '{1,2}'::int[].
Rows with id IS NULL
do not pass either of these expressions. To include NULL
values additionally:
SELECT * FROM tbl WHERE (id = ANY ('{1, 2}')) IS NOT TRUE;
There are 3 states for boolean in PG: true, false and unknown (null). Explained here: Postgres boolean datatype
Therefore you need only query for NOT TRUE:
SELECT * from table_name WHERE boolean_column IS NOT TRUE;
From the stack trace:
HikariPool: Timeout failure pool HikariPool-0 stats (total=20, active=20, idle=0, waiting=0) means the pool reached the maximum connection limit set in the configuration.
The next line, HikariPool-0 - Connection is not available, request timed out after 30000ms, means the pool waited 30000 ms for a free connection, but your application did not return any connection to the pool in the meantime.
Mostly this is a connection leak (a connection is not closed after being borrowed from the pool). Set leakDetectionThreshold to the maximum time you expect a SQL query to take to execute (see the sketch below).
Otherwise, your maximum 'at a time' connection requirement is simply higher than 20!
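A minimal sketch, assuming Spring Boot with HikariCP (the values are assumptions; the threshold is in milliseconds):
spring.datasource.hikari.leak-detection-threshold=60000
spring.datasource.hikari.maximum-pool-size=20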
Make sure the settings are applied correctly in the config file.
vim /etc/postgresql/x.x/main/postgresql.conf
Try the following to see the logs and find your problem.
tail /var/log/postgresql/postgresql-x.x-main.log
Solved it! I don't know exactly what happened, but I just deleted everything and reinstalled. These are the commands I used to delete it: sudo apt-get --purge remove postgresql\* and dpkg -l | grep postgres. The latter is to find all remaining packages in case the removal was not clean.
The solution posted here did not work for me, so I am posting this answer to help anyone facing a similar issue.
OS: Ubuntu 18
PostgreSQL: 9.5 (Hosted on Ubuntu)
Docker: Server Application (which connects to PostgreSQL)
I am using docker-compose.yml to build application.
STEP 1: Please add host.docker.internal:<docker0 IP>
version: '3'
services:
bank-server:
...
depends_on:
....
restart: on-failure
ports:
- 9090:9090
extra_hosts:
- "host.docker.internal:172.17.0.1"
To find the IP of docker0, i.e. 172.17.0.1 (in my case), you can use:
$> ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
OR
$> ip a
1: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
STEP 2: In postgresql.conf, change listen_addresses to listen_addresses = '*'
STEP 3: In pg_hba.conf, add this line
host all all 0.0.0.0/0 md5
STEP 4: Now restart postgresql service using, sudo service postgresql restart
STEP 5: Please use host.docker.internal
hostname to connect database from Server Application.
Ex: jdbc:postgresql://host.docker.internal:5432/bankDB
Enjoy!!
The default postgresql.conf file lives within the PGDATA dir (/var/lib/postgresql/data), which makes things more complicated, especially when running the postgres container for the first time, since the docker-entrypoint.sh wrapper invokes the initdb step for PGDATA dir initialization.
To customize the PostgreSQL configuration in Docker consistently, I suggest using the config_file postgres option together with Docker volumes, like this:
docker run -d \
-v $CUSTOM_CONFIG:/etc/postgresql.conf \
-v $CUSTOM_DATADIR:/var/lib/postgresql/data \
-e POSTGRES_USER=postgres \
-p 5432:5432 \
--name postgres \
postgres:9.6 postgres -c config_file=/etc/postgresql.conf
(If you ran the container before, remove it first with docker rm.)
docker run -d \
-v $CUSTOM_CONFIG:/etc/postgresql.conf \
-e POSTGRES_USER=postgres \
--name postgres \
postgres:9.6 postgres -c config_file=/etc/postgresql.conf
Remove -d (the detach option) from the docker run command to see the server logs directly. Connect to the postgres server with a psql client and query the configuration:
docker run -it --rm --link postgres:postgres postgres:9.6 sh -c 'exec psql -h $POSTGRES_PORT_5432_TCP_ADDR -p $POSTGRES_PORT_5432_TCP_PORT -U postgres'
psql (9.6.0)
Type "help" for help.
postgres=# SHOW all;
Just increasing max_connections is a bad idea. You need to increase shared_buffers and kernel.shmmax as well.
Considerations
max_connections
determines the maximum number of concurrent connections to the database server. The default is typically 100 connections.
Before increasing your connection count you might need to scale up your deployment. But before that, you should consider whether you really need an increased connection limit.
Each PostgreSQL connection consumes RAM for managing the connection or the client using it. The more connections you have, the more RAM you will be using that could instead be used to run the database.
A well-written app typically doesn't need a large number of connections. If you have an app that does need a large number of connections then consider using a tool such as pg_bouncer which can pool connections for you. As each connection consumes RAM, you should be looking to minimize their use.
How to increase max connections
1. Increase max_connections and shared_buffers
In /var/lib/pgsql/{version_number}/data/postgresql.conf, change
max_connections = 100
shared_buffers = 24MB
to
max_connections = 300
shared_buffers = 80MB
The shared_buffers
configuration parameter determines how much memory is dedicated to PostgreSQL to use for caching data.
2. Change kernel.shmmax
You would need to increase the kernel max segment size to be slightly larger than shared_buffers.
In the file /etc/sysctl.conf, set the parameter as shown below. It will take effect when postgresql reboots. (The following line sets the kernel max to 96 MB.)
kernel.shmmax=100663296
References
Here is my version for finding the mount points of a docker-compose project. I use this to back up the volumes.
# for Id in $(docker-compose -f ~/ida/ida.yml ps -q); do docker inspect -f '{{ (index .Mounts 0).Source }}' $Id; done
/data/volumes/ida_odoo-db-data/_data
/data/volumes/ida_odoo-web-data/_data
This is a combination of previous solutions.
It seems your @ComponentScan
annotation is not set properly.
Try :
@ComponentScan(basePackages = {"com.pharmacy"})
Actually you do not need the component scan if you have your main class at the top of the structure, for example directly under com.pharmacy
package.
Also, you don't need both
@SpringBootApplication
@EnableAutoConfiguration
The @SpringBootApplication
annotation includes @EnableAutoConfiguration
by default.
I customized the Tomcat DataSource in Spring Boot 2. Maybe it will be useful for somebody.
application.yml
spring:
datasource:
driver-class-name: org.postgresql.Driver
type: org.apache.tomcat.jdbc.pool.DataSource
url: jdbc:postgresql://${spring.datasource.database.host}:${spring.datasource.database.port}/${spring.datasource.database.name}
database:
host: localhost
port: 5432
name: rostelecom
username: postgres
password: postgres
tomcat:
validation-query: SELECT 1
validation-interval: 30000
test-on-borrow: true
remove-abandoned: true
remove-abandoned-timeout: 480
test-while-idle: true
time-between-eviction-runs-millis: 60000
log-validation-errors: true
log-abandoned: true
Java
@Bean
@Primary
@ConfigurationProperties("spring.datasource.tomcat")
public PoolConfiguration postgresDataSourceProperties() {
return new PoolProperties();
}
@Bean(name = "primaryDataSource")
@Primary
@Qualifier("primaryDataSource")
@ConfigurationProperties(prefix = "spring.datasource")
public DataSource primaryDataSource() {
PoolConfiguration properties = postgresDataSourceProperties();
return new DataSource(properties);
}
The main reason this was done is that the application has several DataSources, and one of them must be marked as @Primary.
From the fine manual:
You must own the table to use ALTER TABLE.
Or be a database superuser.
ERROR: must be owner of relation contact
PostgreSQL error messages are usually spot on. This one is spot on.
First, LATERAL and CROSS APPLY are the same thing. Therefore you may also read about CROSS APPLY; since it has been implemented in SQL Server for ages, you will find more information about it than about LATERAL.
Second, according to my understanding, there is nothing you cannot do using a subquery instead of LATERAL. But:
Consider the following query.
Select A.*
, (Select B.Column1 from B where B.Fk1 = A.PK Limit 1)
, (Select B.Column2 from B where B.Fk1 = A.PK Limit 1)
FROM A
Instead, you can use LATERAL in this situation:
Select A.*
, x.Column1
, x.Column2
FROM A LEFT JOIN LATERAL (
Select B.Column1,B.Column2,B.Fk1 from B Limit 1
) x ON X.Fk1 = A.PK
In this query you cannot use a normal join, due to the limit clause. LATERAL or CROSS APPLY can be used when there is no simple join condition.
There are more usages for LATERAL or CROSS APPLY, but this is the most common one I found.
Here is a simple example:
EXEC sp_executesql @sql, N'@p1 INT, @p2 INT, @p3 INT', @p1, @p2, @p3;
Your call will be something like this
EXEC sp_executesql @statement, N'@LabID int, @BeginDate date, @EndDate date, @RequestTypeID varchar', @LabID, @BeginDate, @EndDate, @RequestTypeID
I just ran this command as root from the terminal and the problem was solved:
sudo apt-get install -y postgis postgresql-9.3-postgis-2.1
pip install psycopg2
or
sudo apt-get install libpq-dev python-dev
pip install psycopg2
The basic idea is to use a nested query with count aggregation:
select * from yourTable ou
where (select count(*) from yourTable inr
where inr.sid = ou.sid) > 1
You can adjust the where clause in the inner query to narrow the search.
There is another good solution for that mentioned in the comments, (but not everyone reads them):
select Column1, Column2, count(*)
from yourTable
group by Column1, Column2
HAVING count(*) > 1
Or shorter:
SELECT (yourTable.*)::text, count(*)
FROM yourTable
GROUP BY yourTable.*
HAVING count(*) > 1
This kind of code should work for you:
SELECT
*,
CASE
WHEN (pvc IS NULL OR pvc = '') AND (datepose < 1980) THEN '01'
WHEN (pvc IS NULL OR pvc = '') AND (datepose >= 1980) THEN '02'
WHEN (pvc IS NULL OR pvc = '') AND (datepose IS NULL OR datepose = 0) THEN '03'
ELSE '00'
END AS modifiedpvc
FROM my_table;
gid | datepose | pvc | modifiedpvc
-----+----------+-----+-------------
1 | 1961 | 01 | 00
2 | 1949 | | 01
3 | 1990 | 02 | 00
1 | 1981 | | 02
1 | | 03 | 00
1 | | | 03
(6 rows)
You can use COALESCE
in conjunction with NULLIF
for a short, efficient solution:
COALESCE( NULLIF(yourField,'') , '0' )
The NULLIF
function will return null if yourField
is equal to the second value (''
in the example), making the COALESCE
function fully working on all cases:
QUERY | RESULT
---------------------------------------------------------------------------------
SELECT COALESCE(NULLIF(null ,''),'0') | '0'
SELECT COALESCE(NULLIF('' ,''),'0') | '0'
SELECT COALESCE(NULLIF('foo' ,''),'0') | 'foo'
Oh, I just found that command on PostgreSQL forum:
SELECT * FROM pg_stat_activity;
There is no way in PostgreSQL to deconstruct a scalar JSON object. Thus, as you point out,
select length(to_json('Some "text"'::TEXT) ::TEXT);
is 15.
The trick is to convert the JSON into an array of one JSON element, then extract that element using ->>
.
select length( array_to_json(array[to_json('Some "text"'::TEXT)])->>0 );
will return 11.
Use this to list the grantees too, excluding pg_monitor and PUBLIC (for Postgres PaaS on Azure):
SELECT grantee,table_catalog, table_schema, table_name, privilege_type
FROM information_schema.table_privileges
WHERE grantee not in ('pg_monitor','PUBLIC');
Update the 'name' attribute:
UPDATE test SET data=data||'{"name":"my-other-name"}' WHERE id = 1;
And if you wanted to remove, for example, the 'name' and 'tags' attributes:
UPDATE test SET data=data-'{"name","tags"}'::text[] WHERE id = 1;
I also faced a similar issue, but it was due to an invalid password. Also, your code seems to be old-style Spring code; you already mentioned that you are using Spring Boot, which means most things will be auto-configured for you. The Hibernate dialect will be auto-selected based on the DB driver available on the classpath, together with valid credentials, which are used to test the connection. If there is any issue with the connection, you will face the same error again. Only 3 properties are needed in application.properties:
# Replace with your connection string
spring.datasource.url=jdbc:mysql://localhost:3306/pdb1
# Replace with your credentials
spring.datasource.username=root
spring.datasource.password=
This modification of a_horse_with_no_name's answer will give you the blocking queries in addition to just the blocked sessions:
SELECT
activity.pid,
activity.usename,
activity.query,
blocking.pid AS blocking_id,
blocking.query AS blocking_query
FROM pg_stat_activity AS activity
JOIN pg_stat_activity AS blocking ON blocking.pid = ANY(pg_blocking_pids(activity.pid));
As easy as
SELECT lpad(42::text, 4, '0')
References:
sqlfiddle: http://sqlfiddle.com/#!15/d41d8/3665
Another way to do that is to comment out everything in the initial migration class between the Up and Down methods, then run update-database. After the seed method has run successfully, run update-database again. It may be helpful to some.
This worked for me:
$ sudo apt-get install php-gd php-mysql
Your calculation is correct for DATE types, but if your values are timestamps, you should probably use EXTRACT (or DATE_PART) to be sure you get only the difference in full days:
EXTRACT(DAY FROM MAX(joindate)-MIN(joindate)) AS DateDifference
An SQLfiddle to test with. Note the timestamp difference being 1 second less than 2 full days.
You haven't created the user db. If it's just a fresh install, the default user is postgres and the password should be blank. After you access it, you can create the users you need.
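For example, a minimal sketch (role name and password are hypothetical):
CREATE ROLE myuser LOGIN PASSWORD 'secret';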
Maybe this will help someone. I was using a nested app, project.appname, and I actually had both project and project.appname in INSTALLED_APPS. Removing project from INSTALLED_APPS allowed the changes to be detected.
This is the command that worked for me:
cat your_dump.sql | sudo docker exec -i {docker-postgres-container} psql -U {user} -d {database_name}
for example
cat table_backup.sql | docker exec -i 03b366004090 psql -U postgres -d postgres
Reference: solution given by GMartinez-Sisti in this discussion: https://gist.github.com/gilyes/525cc0f471aafae18c3857c27519fc4b
The read_sql
docs say this params
argument can be a list, tuple or dict (see docs).
To pass the values in the sql query, there are different syntaxes possible: ?
, :1
, :name
, %s
, %(name)s
(see PEP249).
But not all of these possibilities are supported by all database drivers, which syntax is supported depends on the driver you are using (psycopg2
in your case I suppose).
In your second case, when using a dict, you are using 'named arguments', and according to the psycopg2
documentation, they support the %(name)s
style (and so not the :name
I suppose), see http://initd.org/psycopg/docs/usage.html#query-parameters.
So using that style should work:
df = psql.read_sql(('select "Timestamp","Value" from "MyTable" '
'where "Timestamp" BETWEEN %(dstart)s AND %(dfinish)s'),
db,params={"dstart":datetime(2014,6,24,16,0),"dfinish":datetime(2014,6,24,17,0)},
index_col=['Timestamp'])
My solution was to do a combination of these two resources:
https://gist.github.com/tamoyal/2ea1fcdf99c819b4e07d
and
http://www.gab.lc/articles/migration_postgresql_9-3_to_9-4
The second one helped more than the first one. Also note: don't follow the steps exactly as-is, as some are not necessary. Also, if you are not able to back up the data via the postgres console, you can use an alternative approach and back it up with pgAdmin 3 or some other program, like I did in my case.
Also, this link helped to set the encrypted password and to set md5 for authenticating the postgres user: https://help.ubuntu.com/stable/serverguide/postgresql.html
After all is done, to check the postgres server version, run in a terminal:
sudo -u postgres psql postgres
After entering the password run in postgres terminal:
SHOW SERVER_VERSION;
It will output something like:
server_version
----------------
9.4.5
For setting up and starting postgres I used these commands:
> sudo bash # root
> su postgres # postgres
> /etc/init.d/postgresql start
> /etc/init.d/postgresql stop
And then for restoring database from a file:
> psql -f /home/ubuntu_username/Backup_93.sql postgres
Or if that doesn't work, try this one:
> pg_restore --verbose --clean --no-acl --no-owner -h localhost -U postgres -d name_of_database ~/your_file.dump
And if you are using Rails do a bundle exec rake db:migrate
after pulling the code :)
https://www.postgresql.org/docs/8.4/functions-formatting.html
SELECT to_char(date_field, 'DD/MM/YYYY')
FROM table
Right-Click on your project -> Properties -> Deployment Assembly.
On the Left-hand panel Click 'Add' and add the 'Project and External Dependencies'.
'Project and External Dependencies' will have all the Spring-related jars deployed along with your application.
SELECT json_agg(t) FROM t
for a JSON array of objects, and
SELECT
json_build_object(
'a', json_agg(t.a),
'b', json_agg(t.b)
)
FROM t
for a JSON object of arrays.
This section describes how to generate a JSON array of objects, with each row being converted to a single object. The result looks like this:
[{"a":1,"b":"value1"},{"a":2,"b":"value2"},{"a":3,"b":"value3"}]
The json_agg
function produces this result out of the box. It automatically figures out how to convert its input into JSON and aggregates it into an array.
SELECT json_agg(t) FROM t
There is no jsonb
(introduced in 9.4) version of json_agg
. You can either aggregate the rows into an array and then convert them:
SELECT to_jsonb(array_agg(t)) FROM t
or combine json_agg
with a cast:
SELECT json_agg(t)::jsonb FROM t
My testing suggests that aggregating them into an array first is a little faster. I suspect that this is because the cast has to parse the entire JSON result.
9.2 does not have the json_agg
or to_json
functions, so you need to use the older array_to_json
:
SELECT array_to_json(array_agg(t)) FROM t
You can optionally include a row_to_json
call in the query:
SELECT array_to_json(array_agg(row_to_json(t))) FROM t
This converts each row to a JSON object, aggregates the JSON objects as an array, and then converts the array to a JSON array.
I wasn't able to discern any significant performance difference between the two.
This section describes how to generate a JSON object, with each key being a column in the table and each value being an array of the values of the column. It's the result that looks like this:
{"a":[1,2,3], "b":["value1","value2","value3"]}
We can leverage the json_build_object
function:
SELECT
json_build_object(
'a', json_agg(t.a),
'b', json_agg(t.b)
)
FROM t
You can also aggregate the columns, creating a single row, and then convert that into an object:
SELECT to_json(r)
FROM (
SELECT
json_agg(t.a) AS a,
json_agg(t.b) AS b
FROM t
) r
Note that aliasing the arrays is absolutely required to ensure that the object has the desired names.
Which one is clearer is a matter of opinion. If using the json_build_object
function, I highly recommend putting one key/value pair on a line to improve readability.
You could also use array_agg
in place of json_agg
, but my testing indicates that json_agg
is slightly faster.
There is no jsonb
version of the json_build_object
function. You can aggregate into a single row and convert:
SELECT to_jsonb(r)
FROM (
SELECT
array_agg(t.a) AS a,
array_agg(t.b) AS b
FROM t
) r
Unlike the other queries for this kind of result, array_agg
seems to be a little faster when using to_jsonb
. I suspect this is due to overhead parsing and validating the JSON result of json_agg
.
Or you can use an explicit cast:
SELECT
json_build_object(
'a', json_agg(t.a),
'b', json_agg(t.b)
)::jsonb
FROM t
The to_jsonb
version allows you to avoid the cast and is faster, according to my testing; again, I suspect this is due to overhead of parsing and validating the result.
The json_build_object
function was new to 9.5, so you have to aggregate and convert to an object in previous versions:
SELECT to_json(r)
FROM (
SELECT
json_agg(t.a) AS a,
json_agg(t.b) AS b
FROM t
) r
or
SELECT to_jsonb(r)
FROM (
SELECT
array_agg(t.a) AS a,
array_agg(t.b) AS b
FROM t
) r
depending on whether you want json
or jsonb
.
(9.3 does not have jsonb
.)
In 9.2, not even to_json
exists. You must use row_to_json
:
SELECT row_to_json(r)
FROM (
SELECT
array_agg(t.a) AS a,
array_agg(t.b) AS b
FROM t
) r
Find the documentation for the JSON functions in JSON functions.
json_agg
is on the aggregate functions page.
If performance is important, ensure you benchmark your queries against your own schema and data, rather than trust my testing.
Whether it's a good design or not really depends on your specific application. In terms of maintainability, I don't see any particular problem. It simplifies your app code and means there's less to maintain in that portion of the app. If PG can give you exactly the result you need out of the box, the only reason I can think of to not use it would be performance considerations. Don't reinvent the wheel and all.
Aggregate functions typically give back NULL
when they operate over zero rows. If this is a possibility, you might want to use COALESCE
to avoid them. A couple of examples:
SELECT COALESCE(json_agg(t), '[]'::json) FROM t
Or
SELECT to_jsonb(COALESCE(array_agg(t), ARRAY[]::t[])) FROM t
Credit to Hannes Landeholm for pointing this out.
A Docker image packs up the application and environment required by the application to run, and a container is a running instance of the image.
Images are the packing part of Docker, analogous to "source code" or a "program". Containers are the execution part of Docker, analogous to a "process".
In the question, only the "program" part is referred to and that's the image. The "running" part of Docker is the container. When a container is run and changes are made, it's as if the process makes a change in its own source code and saves it as the new image.
I think it is telling you exactly what is wrong. You cannot compare an integer with a varchar. PostgreSQL is strict and does not do any magic typecasting for you. I'm guessing SQLServer does typecasting automagically (which is a bad thing).
If you want to compare these two different beasts, you will have to cast one to the other using the casting syntax ::
.
Something along these lines:
create view view1
as
select table1.col1,table2.col1,table3.col3
from table1
inner join
table2
inner join
table3
on
table1.col4::varchar = table2.col5
/* Here col4 of table1 is of "integer" type and col5 of table2 is of type "varchar" */
/* ERROR: operator does not exist: integer = character varying */
....;
Notice the varchar
typecasting on the table1.col4.
Also note that typecasting might render your index on that column unusable, with a performance penalty, which is pretty bad. An even better solution would be to see if you can permanently change one of the two column types to match the other one, i.e. literally change your database design.
Or you could create an index on the cast values by using a custom, immutable function which casts the values of the column. But this too may prove suboptimal (though better than live casting).
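A minimal sketch of such an index (names are hypothetical; the wrapper is declared IMMUTABLE so it is allowed in an index definition):
create or replace function int_to_varchar(i integer)
returns varchar as $$ select i::varchar $$
language sql immutable;
create index table1_col4_text_idx on table1 (int_to_varchar(col4));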
You can raise a notice in Postgres as follows:
raise notice 'Value: %', deletedContactId;
Just in case somebody lands here... since 8.1 you can simply use:
SELECT user_id
FROM user_logs
WHERE login_date BETWEEN SYMMETRIC '2014-02-01' AND '2014-02-28'
From the docs:
BETWEEN SYMMETRIC is the same as BETWEEN except there is no requirement that the argument to the left of AND be less than or equal to the argument on the right. If it is not, those two arguments are automatically swapped, so that a nonempty range is always implied.
You should use tag_name outside of the quotes; then it's interpreted as a field of the record. Concatenate using || with the literal percent signs:
SELECT id FROM TAG_TABLE WHERE 'aaaaaaaa' LIKE '%' || tag_name || '%';
Please explain why the same ng-model is used, what value is passed through ng-model, and how it is passed. To be more specific: if I use console.log(color), what would be the output?
jsonb in Postgres 9.4+
You can use the same query as below, just with jsonb_array_elements().
But rather use the jsonb
"contains" operator @>
in combination with a matching GIN index on the expression data->'objects'
:
CREATE INDEX reports_data_gin_idx ON reports
USING gin ((data->'objects') jsonb_path_ops);
SELECT * FROM reports WHERE data->'objects' @> '[{"src":"foo.png"}]';
Since the key objects
holds a JSON array, we need to match the structure in the search term and wrap the array element into square brackets, too. Drop the array brackets when searching a plain record.
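For comparison, a sketch of the same search if the key held a plain object instead of an array:
SELECT * FROM reports WHERE data->'objects' @> '{"src":"foo.png"}';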
More explanation and options:
json in Postgres 9.3+
Unnest the JSON array with the function json_array_elements() in a lateral join in the FROM clause and test for its elements:
SELECT data::text, obj
FROM reports r, json_array_elements(r.data#>'{objects}') obj
WHERE obj->>'src' = 'foo.png';
The CTE (WITH
query) just substitutes for a table reports
.
Or, equivalent for just a single level of nesting:
SELECT *
FROM reports r, json_array_elements(r.data->'objects') obj
WHERE obj->>'src' = 'foo.png';
The ->>, -> and #> operators are explained in the manual.
Both queries use an implicit JOIN LATERAL.
Closely related:
A simple explanation of the difference between json and jsonb (original image by PostgresProfessional):
SELECT '{"c":0, "a":2,"a":1}'::json, '{"c":0, "a":2,"a":1}'::jsonb;
json | jsonb
------------------------+---------------------
{"c":0, "a":2,"a":1} | {"a": 1, "c": 0}
(1 row)
More in the talk video and slide-show presentation by the jsonb developers. They also introduced JsQuery, a Postgres extension that provides a powerful jsonb query language.
GRANT ALL PRIVILEGES ON DATABASE "my_db" to my_user;
Looks like the extension is not installed in the particular database where you require it.
You should connect to this particular database with
\CONNECT my_database
Then install the extension in this database
CREATE EXTENSION "uuid-ossp";
To get the key IDs (8 bytes, 16 hex digits), this is the command which worked for me in GPG 1.4.16, 2.1.18 and 2.2.19:
gpg --list-packets <key.asc | awk '$1=="keyid:"{print$2}'
To get some more information (in addition to the key ID):
gpg --list-packets <key.asc
To get even more information:
gpg --list-packets -vvv --debug 0x2 <key.asc
The command
gpg --dry-run --import <key.asc
also works in all 3 versions, but in GPG 1.4.16 it prints only a short (4 bytes, 8 hex digits) key ID, so it's less secure to identify keys.
Some commands in other answers (e.g. gpg --show-keys
, gpg --with-fingerprint
, gpg --import --import-options show-only
) don't work in some of the 3 GPG versions above, thus they are not portable when targeting multiple versions of GPG.
In psql that would be
\dx
See the manual for details: http://www.postgresql.org/docs/current/static/app-psql.html
Doing it in plain SQL it would be a select on pg_extension
:
SELECT *
FROM pg_extension
http://www.postgresql.org/docs/current/static/catalog-pg-extension.html
SELECT last_value, increment_by from "other_schema".id_seq;
For adding a sequence to a column where the schema is not public, try this:
nextval('"other_schema".id_seq'::regclass)
I got the same error, but from a backend job (an SSIS job). Upon checking the database's log file growth setting, the log file was limited to 1 GB of growth. So when the job ran and asked SQL Server to allocate more log space, the growth limit caused the request to be declined and the job failed. I modified the log growth, setting it to grow by 50 MB with Unlimited Growth, and the error went away.
You can find the pg_config
directory using its namesake:
$ pg_config --bindir
/usr/lib/postgresql/9.1/bin
$
Tested on Mac and Debian. The only wrinkle is that I can't see how to find the bindir for different versions of postgres installed on the same machine. It's fairly easy to guess though! :-)
Note: I updated my pg_config to 9.5 on Debian with:
sudo apt-get install postgresql-server-dev-9.5
Identifiers (including column names) that are not double-quoted are folded to lower case in PostgreSQL. Column names that were created with double-quotes and thereby retained upper-case letters (and/or other syntax violations) have to be double-quoted for the rest of their life:
"first_Name"
Values (string literals / constants) are enclosed in single quotes:
'xyz'
So, yes, PostgreSQL column names are case-sensitive (when double-quoted):
SELECT * FROM persons WHERE "first_Name" = 'xyz';
Read the manual on identifiers here.
My standing advice is to use legal, lower-case names exclusively so double-quoting is not needed.
The error you quote has nothing to do with pg_hba.conf
; it's failing to connect, not failing to authorize the connection.
Do what the error message says:
Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections
You haven't shown the command that produces the error. Assuming you're connecting on localhost
port 5432
(the defaults for a standard PostgreSQL install), then either:
PostgreSQL isn't running
PostgreSQL isn't listening for TCP/IP connections (listen_addresses
in postgresql.conf
)
PostgreSQL is only listening on IPv4 (0.0.0.0
or 127.0.0.1
) and you're connecting on IPv6 (::1
) or vice versa. This seems to be an issue on some older Mac OS X versions that have weird IPv6 socket behaviour, and on some older Windows versions.
PostgreSQL is listening on a different port to the one you're connecting on
(unlikely) there's an iptables
rule blocking loopback connections
(If you are not connecting on localhost
, it may also be a network firewall that's blocking TCP/IP connections, but I'm guessing you're using the defaults since you didn't say).
So ... check those:
ps -f -u postgres
should list postgres
processes
sudo lsof -n -u postgres |grep LISTEN
or sudo netstat -ltnp | grep postgres
should show the TCP/IP addresses and ports PostgreSQL is listening on
BTW, I think you must be on an old version. On my 9.3 install, the error is rather more detailed:
$ psql -h localhost -p 12345
psql: could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 12345?
If someone is having issues creating tables when using files dedicated to each model, be aware of running the "create_all" function from a file different from the one where that function is declared. So, if the filesystem is like this:
Root
--app.py <-- file from which app will be run
--models
----user.py <-- file with "User" model
----order.py <-- file with "Order" model
----database.py <-- file with database and "create_all" function declaration
Be careful about calling the "create_all" function from app.py.
This concept is explained better in the answer to this thread posted by @SuperShoot.
This is the best way to get the schema dynamically and add it to the different tables within a database, in order to get other information dynamically:
select @sql = 'insert #tables SELECT ''[''+SCHEMA_NAME(schema_id)+''.''+name+'']'' AS SchemaTable FROM sys.tables'
exec (@sql)
Of course #tables is a temporary table in the stored procedure.
You can use
insert into <table_name> select <fieldlist> from <tables>
select column_name,data_type
from information_schema.columns
where table_name = 'table_name';
With the above query you can get the columns and their data types.
Another solution is to add @JsonIgnore:
@OneToMany(mappedBy="catalog", fetch = FetchType.LAZY)
@JsonIgnore
private Set<Product> products = new HashSet<Product>(0);
It is possible using ogr2ogr:
C:\Program Files\PostgreSQL\12\bin\ogr2ogr.exe -f "PostgreSQL" PG:"host=someip user=someuser dbname=somedb password=somepw" C:/folder/excelfile.xlsx -nln newtablenameinpostgres -oo AUTODETECT_TYPE=YES
(Not sure if ogr2ogr is included in postgres installation or if I got it with postgis extension.)
I had the same problem in production (in development everything worked). In my case the DB server is not on the same machine as the app, so what finally worked was just to migrate by running:
bundle exec rake db:migrate RAILS_ENV=production
and then restart the server and everything worked.
If your DB is password protected, then the solution would be:
PGPASSWORD=password psql -U username -d dbname -c "select * from my_table"
When you compare update_date >= '2013-05-03', postgres casts the values to the same type in order to compare them. So your '2013-05-03' was cast to '2013-05-03 00:00:00'.
So for update_date = '2013-05-03 14:45:00' your expression becomes:
'2013-05-03 14:45:00' >= '2013-05-03 00:00:00' AND '2013-05-03 14:45:00' <= '2013-05-03 00:00:00'
This is always false.
To solve this problem, cast update_date to date:
select * from table where update_date::date >= '2013-05-03' AND update_date::date <= '2013-05-03' -> Will return result
You may have a table called 'test'
COPY test(gid, "name", the_geom)
FROM '/home/data/sample.csv'
WITH DELIMITER ','
CSV HEADER
There are basically 4 techniques for this task, all of them standard SQL.
NOT EXISTS
Often fastest in Postgres.
SELECT ip
FROM login_log l
WHERE NOT EXISTS (
SELECT -- SELECT list mostly irrelevant; can just be empty in Postgres
FROM ip_location
WHERE ip = l.ip
);
Also consider:
LEFT JOIN / IS NULL
Sometimes this is fastest. Often shortest. Often results in the same query plan as NOT EXISTS
.
SELECT l.ip
FROM login_log l
LEFT JOIN ip_location i USING (ip) -- short for: ON i.ip = l.ip
WHERE i.ip IS NULL;
EXCEPT
Short. Not as easily integrated in more complex queries.
SELECT ip
FROM login_log
EXCEPT ALL -- "ALL" keeps duplicates and makes it faster
SELECT ip
FROM ip_location;
Note that (per documentation):
duplicates are eliminated unless
EXCEPT ALL
is used.
Typically, you'll want the ALL
keyword. If you don't care, still use it because it makes the query faster.
NOT IN
Only good without NULL
values or if you know to handle NULL
properly. I would not use it for this purpose. Also, performance can deteriorate with bigger tables.
SELECT ip
FROM login_log
WHERE ip NOT IN (
SELECT DISTINCT ip -- DISTINCT is optional
FROM ip_location
);
NOT IN
carries a "trap" for NULL
values on either side:
Similar question on dba.SE targeted at MySQL:
This is the normal behavior, and the reason is that your sqlCommandHandlerService.persist method needs a TX when being executed (because it is marked with the @Transactional annotation). But when it is called inside processNextRegistrationMessage, because there is already a TX available, the container doesn't create a new one and uses the existing TX. So if any exception occurs in the sqlCommandHandlerService.persist method, it causes the TX to be set to rollBackOnly (even if you catch the exception in the caller and ignore it).
To overcome this you can use propagation levels for transactions. Have a look at this to find out which propagation best suits your requirements.
Well after a colleague came to me with a couple of questions about a similar situation, I feel this needs a bit of clarification.
Although propagations solve such issues, you should be VERY careful about using them and do not use them unless you ABSOLUTELY understand what they mean and how they work. You may end up persisting some data and rolling back some others where you don't expect them to work that way and things can go horribly wrong.
I had the same problem with Celery. My settings.py before:
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY')
after:
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY', <YOUR developing key>)
If the environment variable is not defined, then SECRET_KEY falls back to your developing key.
case when field1>0 then field2/field1 else 0 end as field3
Might I suggest giving QueryStorm a try - it's a plugin for Excel that makes it quite convenient to use SQL in Excel.
Also, it's freemium. If you don't care about autocomplete, error squigglies etc, you can use it for free. Just download and install, and you have SQL support in Excel.
Disclaimer: I'm the author.
For updating multiple rows in a single query, you can try this:
UPDATE table_name
SET
column_1 = CASE WHEN any_column = value and any_column = value THEN column_1_value end,
column_2 = CASE WHEN any_column = value and any_column = value THEN column_2_value end,
column_3 = CASE WHEN any_column = value and any_column = value THEN column_3_value end,
.
.
.
column_n = CASE WHEN any_column = value and any_column = value THEN column_n_value end
If you don't need the additional condition, then remove the and part of this query.
To check the status of a service on a Linux operating system:
// if superuser (admin) privileges are required
sudo service {service_name} status
// as a normal user
service {service_name} status
To stop or start a service:
// if admin privileges are required
sudo service {service_name} start/stop
// as a normal user
service {service_name} start/stop
To get the list of all services along with their PIDs:
sudo service --status-all
You can use systemctl instead of directly calling service :
systemctl status/start/stop {service_name}
Try this:
Open PgAdmin -> Files -> Open pgpass.conf
You will find the path of pgpass.conf at the bottom of the window. Go to that location and open this file; you can find your password there.
If the above does not work, you may consider trying this:
1. edit pg_hba.conf to allow trust authorization temporarily
2. Reload the config file (pg_ctl reload)
3. Connect and issue ALTER ROLE / PASSWORD to set the new password (see the sketch after this list)
4. edit pg_hba.conf again and restore the previous settings
5. Reload the config file again
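A minimal sketch for step 3 (role name and password are hypothetical):
ALTER ROLE postgres WITH PASSWORD 'new_password';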
I have reproduced the issue in my system,
postgres=# alter user my-sys with password 'pass11';
ERROR: syntax error at or near "-"
LINE 1: alter user my-sys with password 'pass11';
^
Here is the issue: psql is asking for more input, and you entered the alter query again (note the postgres-# prompt). That's why it reports an error at "alter":
postgres-# alter user "my-sys" with password 'pass11';
ERROR: syntax error at or near "alter"
LINE 2: alter user "my-sys" with password 'pass11';
^
The solution is as simple as the error:
postgres=# alter user "my-sys" with password 'pass11';
ALTER ROLE
This is a complementary answer for the specific case of using AWS cloud computing (either EC2 or RDS machines).
Besides doing everything proposed above, when using AWS cloud computing you will need to set your inbound rules in a way that allows access to the ports. Please check this post, which is valid for EC2 and RDS.
While this is still a part of Docker that needs some work, you should put the volume in the Dockerfile with the VOLUME instruction so you don't need to copy the volumes from another container.
That will make your containers less inter-dependent and you don't have to worry about the deletion of one container affecting another.
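For example, a minimal Dockerfile sketch (the base image and path are assumptions for a Postgres data volume):
FROM postgres:9.6
# Declare the data directory as a volume so its contents are not tied to this container's lifecycle
VOLUME /var/lib/postgresql/data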
Just create the database using the createdb CLI tool:
PGHOST="my.database.domain.com"
PGUSER="postgres"
PGDB="mydb"
createdb -h $PGHOST -p $PGPORT -U $PGUSER $PGDB
If the database exists, it will return an error:
createdb: database creation failed: ERROR: database "mydb" already exists
It's a very old post, but if it benefits anyone, we can do something like this:
Long max=((BigInteger) Collections.max(dynamics)).longValue();
All in Python (using psycopg2): create the empty table first, then use copy_expert to load the csv into it. It should handle empty values.
import psycopg2
conn = psycopg2.connect(host="hosturl", database="db_name", user="username", password="password")
cur = conn.cursor()
cur.execute("CREATE TABLE schema.destination_table ("
"age integer, "
"first_name varchar(20), "
"last_name varchar(20)"
");")
with open(r'C:/tmp/people.csv', 'r') as f:
    next(f)  # Skip the header row. Or remove this line if the csv has no header.
    cur.copy_expert("""COPY schema.destination_table FROM STDIN WITH (FORMAT CSV)""", f)
conn.commit()  # persist the loaded rows
You can connect to psql as below and write your SQL queries just as you would within a regular Postgres function block. Bash variables can be used there. However, the script should be strictly SQL; even for comments you need to use -- instead of #:
#!/bin/bash
psql postgresql://<user>:<password>@<host>/<db> << EOF
<your sql queries go here>
EOF
UPDATE table_name SET attrs = jsonb_set(cast(attrs as jsonb), '{key}', '"new_value"', true) WHERE id = 'some_id';
This is what worked for me; attrs is a json-type field. First cast to jsonb, then update.
or
UPDATE table_name SET attrs = jsonb_set(cast(attrs as jsonb), '{key}', '"new_value"', true) WHERE attrs->>'key' = 'old_value';
This occurs if the password for the database is not given.
default="postgres://postgres:password@127.0.0.1:5432/DBname"
I spent some time looking for the best answer, and I think I have it.
This sql will give you the number of days between two dates as an integer:
SELECT
(EXTRACT(epoch from age('2017-6-15', now())) / 86400)::int
..which, when run today (2017-3-28
), provides me with:
?column?
------------
77
The misconception about the accepted answer:
select age('2010-04-01', '2012-03-05'),
date_part('year',age('2010-04-01', '2012-03-05')),
date_part('month',age('2010-04-01', '2012-03-05')),
date_part('day',age('2010-04-01', '2012-03-05'));
..is that you will get the literal difference between the parts of the date strings, not the amount of time between the two dates.
I.E:
Age(interval)=-1 years -11 mons -4 days;
Years(double precision)=-1;
Months(double precision)=-11;
Days(double precision)=-4;
I have been using this function for a while with no problems. You just need to provide the number of columns in the csv file, and it will take the header names from the first row and create the table for you:
create or replace function data.load_csv_file
(
target_table text, -- name of the table that will be created
csv_file_path text,
col_count integer
)
returns void
as $$
declare
iter integer; -- dummy integer to iterate columns with
col text; -- to keep column names in each iteration
col_first text; -- first column name, e.g., top left corner on a csv file or spreadsheet
begin
set schema 'data';
create table temp_table ();
-- add just enough number of columns
for iter in 1..col_count
loop
execute format ('alter table temp_table add column col_%s text;', iter);
end loop;
-- copy the data from csv file
execute format ('copy temp_table from %L with delimiter '','' quote ''"'' csv ', csv_file_path);
iter := 1;
col_first := (select col_1
from temp_table
limit 1);
-- update the column names based on the first row which has the column names
for col in execute format ('select unnest(string_to_array(trim(temp_table::text, ''()''), '','')) from temp_table where col_1 = %L', col_first)
loop
execute format ('alter table temp_table rename column col_%s to %s', iter, col);
iter := iter + 1;
end loop;
-- delete the columns row // using quote_ident or %I does not work here!?
execute format ('delete from temp_table where %s = %L', col_first, col_first);
-- change the temp table name to the name given as parameter, if not blank
if length (target_table) > 0 then
execute format ('alter table temp_table rename to %I', target_table);
end if;
end;
$$ language plpgsql;
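A call might look like this (hypothetical table name and path, for a CSV with three columns):
select data.load_csv_file('my_table', '/tmp/my_data.csv', 3);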
Had the same problem; a simple psql -d postgres
did it (type the command in the terminal).
bma's answer is great! I have used it with ActiveRecord; here it is if anybody needs it in Rails:
Model.find_by_sql(
"SELECT TO_CHAR(created_at, 'Mon') AS month,
EXTRACT(year from created_at) as year,
SUM(desired_value) as desired_value
FROM desired_table
GROUP BY 1,2
ORDER BY 1,2"
)
REVOKE CONNECT
will not prevent connections from the db owner or a superuser. So if you don't want anyone to connect to the db, the following command may be useful:
alter database pilot allow_connections = off;
Then use:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'pilot';
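Once the maintenance is done, connections can be re-enabled the same way:
alter database pilot allow_connections = on;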
Well, this is my final solution for a simple db, for Linux:
# Read this before!
#
# * roles in postgres are users, and can be used also as group of users
# * $ROLE_LOCAL will be the user that access the db for maintenance and
# administration. $ROLE_REMOTE will be the user that access the db from the webapp
# * you have to change '$ROLE_LOCAL', '$ROLE_REMOTE' and '$DB'
# strings with your desired names
# * it's preferable that $ROLE_LOCAL == $DB
#-------------------------------------------------------------------------------
#----------- SKIP THIS PART UNTIL POSTGRES JDBC ADDS SCRAM - START ----------#
cd /etc/postgresql/$VERSION/main
sudo cp pg_hba.conf pg_hba.conf_bak
sudo -e pg_hba.conf
# change all `md5` with `scram-sha-256`
# save and exit
#------------ SKIP THIS PART UNTIL POSTGRES JDBC ADDS SCRAM - END -----------#
sudo -u postgres psql
# in psql:
create role $ROLE_LOCAL login createdb;
\password $ROLE_LOCAL
create role $ROLE_REMOTE login;
\password $ROLE_REMOTE
create database $DB owner $ROLE_LOCAL encoding "utf8";
\connect $DB $ROLE_LOCAL
# Create all tables and objects, and after that:
\connect $DB postgres
revoke connect on database $DB from public;
revoke all on schema public from public;
revoke all on all tables in schema public from public;
grant connect on database $DB to $ROLE_LOCAL;
grant all on schema public to $ROLE_LOCAL;
grant all on all tables in schema public to $ROLE_LOCAL;
grant all on all sequences in schema public to $ROLE_LOCAL;
grant all on all functions in schema public to $ROLE_LOCAL;
grant connect on database $DB to $ROLE_REMOTE;
grant usage on schema public to $ROLE_REMOTE;
grant select, insert, update, delete on all tables in schema public to $ROLE_REMOTE;
grant usage, select on all sequences in schema public to $ROLE_REMOTE;
grant execute on all functions in schema public to $ROLE_REMOTE;
alter default privileges for role $ROLE_LOCAL in schema public
grant all on tables to $ROLE_LOCAL;
alter default privileges for role $ROLE_LOCAL in schema public
grant all on sequences to $ROLE_LOCAL;
alter default privileges for role $ROLE_LOCAL in schema public
grant all on functions to $ROLE_LOCAL;
alter default privileges for role $ROLE_REMOTE in schema public
grant select, insert, update, delete on tables to $ROLE_REMOTE;
alter default privileges for role $ROLE_REMOTE in schema public
grant usage, select on sequences to $ROLE_REMOTE;
alter default privileges for role $ROLE_REMOTE in schema public
grant execute on functions to $ROLE_REMOTE;
# CTRL+D
I would like to contribute another solution for the single-insertion problem with the pre-9.5 versions of PostgreSQL. The idea is simply to try to perform the insertion first, and in case the record is already present, to update it:
do $$
begin
insert into testtable(id, somedata) values(2,'Joe');
exception when unique_violation then
update testtable set somedata = 'Joe' where id = 2;
end $$;
Note that this solution can be applied only if there are no deletions of rows of the table.
I do not know about the efficiency of this solution, but it seems to me reasonable enough.
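For reference, on PostgreSQL 9.5 and later the same upsert can be expressed natively, without the exception handler (this requires a unique constraint or index on id):
INSERT INTO testtable (id, somedata) VALUES (2, 'Joe')
ON CONFLICT (id) DO UPDATE SET somedata = EXCLUDED.somedata;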
In my case, it was working in x86 but not in x64.
It's quite ridiculous, but in x64 the following change had to be made before it would work:
x86 -> szDsn = "DRIVER={MICROSOFT ACCESS DRIVER (*.mdb)};
x64 -> szDsn = "DRIVER={MICROSOFT ACCESS DRIVER (*.mdb, *.accdb)};
Note the addition of *.accdb
.
NOW() is the Postgres equivalent of Oracle's SYSDATE.
Try SELECT now(); it will give you the system timestamp.
Another approach is to use the first_value
window function: http://sqlfiddle.com/#!12/7a145/14
SELECT DISTINCT
first_value("id") OVER (PARTITION BY "category" ORDER BY "date" DESC)
FROM Table1
ORDER BY 1;
... though I suspect hims056's suggestion will typically perform better where appropriate indexes are present.
A third solution is:
SELECT
id
FROM (
SELECT
id,
row_number() OVER (PARTITION BY "category" ORDER BY "date" DESC) AS rownum
FROM Table1
) x
WHERE rownum = 1;
Instead of using a sheet name, in case you don't know or can't open the excel file to check (in my case, Python 3.6.7, Ubuntu 18.04), use the parameter sheet_name (sheet_name=0 for the first sheet):
import pandas as pd
file_name = 'some_data_file.xlsx'
df = pd.read_excel(file_name, sheet_name=0)
print(df.head()) # print the first 5 rows
Wrap it in a function:
create function now_utc() returns timestamp as $$
select now() at time zone 'utc';
$$ language sql;
create temporary table test(
id int,
ts timestamp without time zone default now_utc()
);
The solution above does not work on the latest versions of PostgreSQL. I found this way to convert epoch time stored in a number/int column, on PostgreSQL 13:
SELECT TIMESTAMP 'epoch' + (<table>.field::int) * INTERVAL '1 second' as started_on from <table>;
For more detail explanation, you can see here https://www.yodiw.com/convert-epoch-time-to-timestamp-in-postgresql/#more-214
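The built-in to_timestamp() function does the same conversion more directly; it takes seconds since the epoch and returns a timestamp with time zone:
SELECT to_timestamp(<table>.field::int) AS started_on FROM <table>;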
If the project is Maven-based, place it in src/main/resources; in the package phase it will be copied to ../WEB-INF/classes/hibernate.cfg.xml
On Ubuntu 18.04:
sudo systemctl restart postgresql.service
Your function would work like this:
CREATE OR REPLACE FUNCTION prc_tst_bulk(sql text)
RETURNS TABLE (name text, rowcount integer) AS
$$
BEGIN
RETURN QUERY EXECUTE '
WITH v_tb_person AS (' || sql || $x$)
SELECT name, count(*)::int FROM v_tb_person WHERE nome LIKE '%a%' GROUP BY name
UNION
SELECT name, count(*)::int FROM v_tb_person WHERE gender = 1 GROUP BY name$x$;
END
$$ LANGUAGE plpgsql;
Call:
SELECT * FROM prc_tst_bulk($$SELECT a AS name, b AS nome, c AS gender FROM tbl$$)
You cannot mix plain and dynamic SQL the way you tried to do it. The whole statement is either all dynamic or all plain SQL. So I am building one dynamic statement to make this work. You may be interested in the chapter about executing dynamic commands in the manual.
The aggregate function count() returns bigint, but you had rowcount defined as integer, so you need an explicit cast ::int to make this work.
I use dollar quoting to avoid quoting hell.
However, is this supposed to be a honeypot for SQL injection attacks or are you seriously going to use it? For your very private and secure use, it might be ok-ish - though I wouldn't even trust myself with a function like that. If there is any possible access for untrusted users, such a function is a loaded footgun. It's impossible to make this secure.
Craig (a sworn enemy of SQL injection!) might get a light stroke, when he sees what you forged from his piece of code in the answer to your preceding question. :)
The query itself seems rather odd, btw. But that's beside the point here.
Make sure that the attribute is a primary key and auto-incrementable in the database. Then map it in the data class with the @GeneratedValue annotation using IDENTITY:
@Entity
@Table(name = "client")
data class Client(
@Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name = "id") private val id: Int? = null
)
GL
My Answer: All of the following should be overridden (i.e. describe them all within columnDefinition, if appropriate):
length
precision
scale
nullable
unique
i.e. the column DDL will consist of: name + columnDefinition and nothing else.
Rationale follows.
Annotations containing the word "Column" or "Table" are purely physical - properties used only to control DDL/DML against the database.
Other annotations are purely logical - properties used in-memory in Java to control JPA processing.
That's why it sometimes appears that optionality/nullability is set twice - once via @Basic(...,optional=true) and once via @Column(...,nullable=true). The former says the attribute/association can be null in the JPA object model (in-memory) at flush time; the latter says the DB column can be null. Usually you'd want them set the same - but not always, depending on how the DB tables are set up and reused.
In your example, the length and nullable properties are overridden and redundant.
So, when specifying columnDefinition, what other properties of @Column are made redundant?
In JPA Spec & javadoc:
columnDefinition
definition:
The SQL fragment that is used when generating the DDL for the column.
columnDefinition
default:
Generated SQL to create a column of the inferred type.
The following examples are provided:
@Column(name="DESC", columnDefinition="CLOB NOT NULL", table="EMP_DETAIL")
@Column(name="EMP_PIC", columnDefinition="BLOB NOT NULL")
And, err..., that's it really. :-$ ?!
Does columnDefinition override other properties provided in the same annotation?
The javadoc and JPA spec don't explicitly address this - the spec's not giving great protection. To be 100% sure, test with your chosen implementation.
The following can be safely implied from examples provided in the JPA spec:
- name & table can be used in conjunction with columnDefinition; neither is overridden.
- nullable is overridden/made redundant by columnDefinition.
The following can be fairly safely implied from the "logic of the situation" (did I just say that?? :-P ):
- length, precision, scale are overridden/made redundant by the columnDefinition - they are integral to the type.
- insertable and updatable are provided separately and never included in columnDefinition, because they control SQL generation in-memory, before it is emitted to the database.
That leaves just the "unique" property. It's similar to nullable - it extends/qualifies the type definition, so it should be treated as integral to the type definition, i.e. it should be overridden.
Test My Answer: for columns "A" & "B", respectively:
@Column(name="...", table="...", insertable=true, updatable=false,
        columnDefinition="NUMBER(5,2) NOT NULL UNIQUE")
@Column(name="...", table="...", insertable=false, updatable=true,
        columnDefinition="NVARCHAR2(100) NULL")
Agree with neubert about the DECLARE statements; this will fix the syntax error. But I would suggest avoiding cursors, as they may be slow.
For your task: use an INSERT ... SELECT statement, which will copy data from one table to another with only one query.
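A minimal sketch of that pattern, with hypothetical table and column names:
INSERT INTO target_table (code, amount)
SELECT coa_code, amount
FROM t_ledger_detail;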
Maybe the following is what you are looking for:
SELECT name, pathfilename
FROM table1
NATURAL JOIN table2
NATURAL JOIN table3
WHERE name = 'John';
Alternatively to information_schema
it is possible to use pg_tables
:
select * from pg_tables where schemaname='public';
In my own case, with Django 1.10.1 running on Python 2.7.11, I was trying to start the server using django-admin runserver instead of manage.py runserver in my project directory.
First, create a staticfiles folder. Inside that folder, create css, js, and img folders.
settings.py
import os
PROJECT_DIR = os.path.dirname(__file__)
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(PROJECT_DIR, 'myweblabdev.sqlite'),
'USER': '',
'PASSWORD': '',
'HOST': '',
'PORT': '',
}
}
MEDIA_ROOT = os.path.join(PROJECT_DIR, 'media')
MEDIA_URL = '/media/'
STATIC_ROOT = os.path.join(PROJECT_DIR, 'static')
STATIC_URL = '/static/'
STATICFILES_DIRS = (
os.path.join(PROJECT_DIR, 'staticfiles'),
)
main urls.py
from django.conf.urls import patterns, include, url
from django.conf.urls.static import static
from django.contrib import admin
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
from myweblab import settings
admin.autodiscover()
urlpatterns = patterns('',
.......
) + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
urlpatterns += staticfiles_urlpatterns()
template
{% load static %}
<link rel="stylesheet" href="{% static 'css/style.css' %}">
python manage.py flush
deleted the old db contents.
Don't forget to create a new superuser:
python manage.py createsuperuser
On Ubuntu it is advised to use the distribution's repository:
sudo apt-get install python-mysqldb
I had one view that threw a 500 error in debug=false but worked in debug=true. For anyone who is getting this kind of thing and Allowed Hosts is not the problem, I fixed my view by updating a template's static tag that was pointing to the wrong location.
So I'd suggest just checking links and tags are airtight in any templates used, maybe certain things slip through the net in debug but give errors in production.
I ran into this problem as well, it ended up that I had two postgres servers running at the same time. I uninstalled one of them and changed the port back to 5432 and works fine now.
In the case of nested queries, some DBMSs, like MySQL and Oracle, require you to use an alias, but others do not have such a strict requirement, though they still allow you to add one to name the result of the inner query.
Try this:
SELECT table_name
FROM information_schema.tables
WHERE table_schema='public' AND table_type='BASE TABLE'
This one works!
select definition from pg_views where viewname = 'my_view'
N.B. - this question and answer relate to the 2000 version of SQL Server. In later versions, the restriction on INSERT INTO @table_variable ... EXEC ... was lifted, so it doesn't apply to those later versions.
You'll have to switch to a temp table:
CREATE TABLE #tmp (code varchar(50), mount money)
DECLARE @q nvarchar(4000)
SET @q = 'SELECT coa_code, amount FROM T_Ledger_detail'
INSERT INTO #tmp (code, mount)
EXEC sp_executesql (@q)
SELECT * from #tmp
From the documentation:
A table variable behaves like a local variable. It has a well-defined scope, which is the function, stored procedure, or batch in which it is declared.
Within its scope, a table variable may be used like a regular table. It may be applied anywhere a table or table expression is used in SELECT, INSERT, UPDATE, and DELETE statements. However, table may not be used in the following statements:
INSERT INTO table_variable EXEC stored_procedure
SELECT select_list INTO table_variable statements.
As stated here about hot_standby_feedback = on
:
Well, the disadvantage of it is that the standby can bloat the master, which might be surprising to some people, too
And here:
With what setting of max_standby_streaming_delay? I would rather default that to -1 than default hot_standby_feedback on. That way what you do on the standby only affects the standby
So I added
max_standby_streaming_delay = -1
And no more pg_dump
error for us, nor master bloat :)
For AWS RDS instance, check http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html
Assuming you're the administrator of the machine, Ubuntu has granted you the right to sudo to run any command as any user.
Also assuming you did not restrict the rights in the pg_hba.conf
file (in the /etc/postgresql/9.1/main
directory), it should contain this line as the first rule:
# Database administrative login by Unix domain socket
local all postgres peer
(About the file location: 9.1
is the major postgres version and main
the name of your "cluster". It will differ if using a newer version of postgres or non-default names. Use the pg_lsclusters
command to obtain this information for your version/system).
Anyway, if the pg_hba.conf
file does not have that line, edit the file, add it, and reload the service with sudo service postgresql reload
.
Then you should be able to log in with psql
as the postgres superuser with this shell command:
sudo -u postgres psql
Once inside psql, issue the SQL command:
ALTER USER postgres PASSWORD 'newpassword';
In this command, postgres
is the name of a superuser. If the user whose password is forgotten was ritesh
, the command would be:
ALTER USER ritesh PASSWORD 'newpassword';
References: PostgreSQL 9.1.13 Documentation, Chapter 19. Client Authentication
Keep in mind that you need to type postgres with a single S at the end
If leaving the password in clear text in the history of commands or the server log is a problem, psql provides an interactive meta-command to avoid that, as an alternative to ALTER USER ... PASSWORD
:
\password username
It asks for the password with a double blind input, then hashes it according to the password_encryption
setting and issue the ALTER USER
command to the server with the hashed version of the password, instead of the clear text version.
I came across this question, and the answers here didn't work for me; I couldn't figure out why I couldn't log in and got the above error.
It turns out that PostgreSQL folds usernames to lowercase when they are created, while authentication uses the name exactly as you type it.
CREATE USER myNewUser WITH PASSWORD 'passWord';
will create a user with the username 'mynewuser' and the password 'passWord'.
This means you have to authenticate with 'mynewuser', not with 'myNewUser'. For a newbie in pgsql like me, this was confusing. I hope it helps others who run into this problem.
You should take a look at pg_dump
:
pg_dump -s databasename
Will dump only the schema to stdout as .sql.
For windows, you'll probably want to call pg_dump.exe
. I don't have access to a Windows machine but I'm pretty sure from memory that's the command. See if the help works for you too.
If, like me, you landed here with:
Here is what worked for me, utilizing one of the above answers and expanding it:
-- up
ALTER TABLE myoldtable ADD COLUMN newcolumn TEXT;
ALTER TABLE myoldtable ADD CONSTRAINT myoldtable_oldcolumn_newcolumn_key UNIQUE (oldcolumn, newcolumn);
---
ALTER TABLE myoldtable DROP CONSTRAINT myoldtable_oldcolumn_newcolumn_key;
ALTER TABLE myoldtable DROP COLUMN newcolumn;
-- down
It's been a while since this question was asked; hope this can help. Since you cannot change or alter the db structure, you can do this, according to the PostgreSQL docs:
TRUNCATE -- empty a table or set of tables.
TRUNCATE [ TABLE ] [ ONLY ] name [ * ] [, ... ]
[ RESTART IDENTITY | CONTINUE IDENTITY ] [ CASCADE | RESTRICT ]
Description
TRUNCATE quickly removes all rows from a set of tables. It has the same effect as an unqualified DELETE on each table, but since it does not actually scan the tables it is faster. Furthermore, it reclaims disk space immediately, rather than requiring a subsequent VACUUM operation. This is most useful on large tables.
Truncate the table othertable, and cascade to any tables that reference othertable via foreign-key constraints:
TRUNCATE othertable CASCADE;
The same, and also reset any associated sequence generators:
TRUNCATE bigtable, fattable RESTART IDENTITY;
Truncate and reset any associated sequence generators:
TRUNCATE revinfo RESTART IDENTITY CASCADE ;
In my humble experience with postgres 9.6, cascade delete doesn't work in practice for tables that grow above a trivial size.
psql -h <host> -p <port> -U <user> -d <db>
and update /var/lib/pgsql/10/data/pg_hba.conf
to change the auth method to cert
. Check the following link for more information:
select age(timestamp_A, timestamp_B)
Answering Igor's comment:
select age('2013-02-28 11:01:28'::timestamp, '2011-12-31 11:00'::timestamp);
age
-------------------------------
1 year 1 mon 28 days 00:01:28
File permissions are restrictive on the Postgres db owned by the Mac OS. These permissions are reset after reboot, or restart of Postgres: e.g. serveradmin start postgres.
So, temporarily reset the permissions or ownership:
sudo chmod o+rwx /var/pgsql_socket/.s.PGSQL.5432
sudo chown "webUser" /var/pgsql_socket/.s.PGSQL.5432
Permissions resetting is not secure, so install a version of the db that you own for a solution.
You can cast an integer to a string in this way
intval::text
and so in your case
SELECT * FROM table WHERE <some integer>::text = 'string of numbers'
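The standard CAST syntax is equivalent, if you prefer it:
SELECT * FROM table WHERE CAST(<some integer> AS text) = 'string of numbers'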
I was facing similar problems trying to do some data import on an intermediate schema (that we later move to the final one). As we rely on things like extensions (for example PostGIS), the "run_insert" sql file did not fully solve the problem.
After a while, we found that at least with Postgres 9.3 the solution is far easier... just create your SQL script always specifying the schema when referring to the table:
CREATE TABLE "my_schema"."my_table" (...);
COPY "my_schema"."my_table" (...) FROM stdin;
This way using psql -f xxxxx
works perfectly, and you don't need to change search_paths nor use intermediate files (and won't hit extension schema problems).
If you're using CLI and you're a postgres
user, then you can do this:
psql -c "SELECT version();"
Possible output:
version
-------------------------------------------------------------------------------------------------------------------------
PostgreSQL 11.1 (Debian 11.1-3.pgdg80+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10+deb8u2) 4.9.2, 64-bit
(1 row)
(SELECT <some columns>
FROM mytable
<maybe some joins here>
WHERE <various conditions>
ORDER BY date DESC
LIMIT 10)
UNION ALL
(SELECT <some columns>
FROM mytable
<maybe some joins here>
WHERE <various conditions>
ORDER BY date ASC
LIMIT 10)
ALTER TABLE person ALTER COLUMN phone DROP NOT NULL;
More details in the manual: http://www.postgresql.org/docs/9.1/static/sql-altertable.html
Another class of reasons why this can happen is due to Postgres version updates.
You can confirm this is a problem by looking at the postgres logs:
tail -n 10000 /usr/local/var/log/postgres.log
and seeing entries like:
DETAIL: The data directory was initialized by PostgreSQL version 12, which is not compatible with this version 13.0.
In this case (assuming you are on Mac and using brew
), just run:
brew postgresql-upgrade-database
(Oddly, it failed on run 1 and worked on run 2, so try it twice before giving up)
You should execute the following query:
GRANT ALL ON TABLE mytable TO myuser;
Or, if your error occurs on a view, then maybe the underlying table does not have the permission granted, so you should execute the following query:
GRANT ALL ON TABLE tbm_grupo TO myuser;
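If many tables in the schema are affected, you can grant on all of them in one statement (adjust the privileges to what the user actually needs):
GRANT ALL ON ALL TABLES IN SCHEMA public TO myuser;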
By specifying the column as VARCHAR(500)
you've set an explicit 500 character limit. You might not have done this yourself explicitly, but Django has done it for you somewhere. Telling you where is hard when you haven't shown your model, the full error text, or the query that produced the error.
If you don't want one, use an unqualified VARCHAR
, or use the TEXT
type.
varchar
and text
are limited in length only by the system limits on column size - about 1GB - and by your memory. However, adding a length-qualifier to varchar
sets a smaller limit manually. All of the following are largely equivalent:
column_name VARCHAR(500)
column_name VARCHAR CHECK (length(column_name) <= 500)
column_name TEXT CHECK (length(column_name) <= 500)
The only differences are in how database metadata is reported and which SQLSTATE is raised when the constraint is violated.
The length constraint is not generally obeyed in prepared statement parameters, function calls, etc, as shown:
regress=> \x
Expanded display is on.
regress=> PREPARE t2(varchar(500)) AS SELECT $1;
PREPARE
regress=> EXECUTE t2( repeat('x',601) );
-[ RECORD 1 ]-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
?column? | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
and in explicit casts it results in truncation:
regress=> SELECT repeat('x',501)::varchar(1);
-[ RECORD 1 ]
repeat | x
so I think you are using a VARCHAR(500)
column, and you're looking at the wrong table or wrong instance of the database.
This is actually what you are supposed to do:
you should instead look at /usr/local/var/postgres/postmaster.pid
and then look at the first line of the file - this is the bad PID
Run
ps aux | grep <PID>
for example:
ps aux | grep 12345
then do
kill <PID>
for example
kill 12345
Assuming it's still running
https://superuser.com/questions/553045/fatal-lock-file-postmaster-pid-already-exists
Do not listen to the accepted answer! It is bad and will corrupt your data!!!
Shorter answer: ignore it.
This module is the part of Postgres that processes the SQL language. The error will often pop up as part of copying a remote database, such as with a 'heroku pg:pull'. It does not overwrite your SQL processor and warns you about that.
If you are not in a session, you can just call nextval('your_sequence_name') and it's just fine.
Depending on your PostgreSQL version you would need to add the postgresql driver to your pom.xml
file.
For PostgreSQL 9.1 this would be:
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<name>Your project name.</name>
<dependencies>
<dependency>
<groupId>postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>9.1-901-1.jdbc4</version>
</dependency>
</dependencies>
</project>
You can get the code for the dependency (as well as any other dependency) from maven's central repository
If you are using postgresql 9.2+:
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<name>Your project name.</name>
<dependencies>
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>42.2.1</version>
</dependency>
</dependencies>
</project>
You can check the latest versions and dependency snippets from:
In PostgreSQL 9.6, there's a new option idle_in_transaction_session_timeout
which should accomplish what you describe. You can set it using the SET
command, e.g.:
SET SESSION idle_in_transaction_session_timeout = '5min';
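To apply it to every new session rather than just the current one, it can also be set at the database level (hypothetical database name):
ALTER DATABASE mydb SET idle_in_transaction_session_timeout = '5min';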
-- IN arguments : you get them. You can modify them locally but caller won't see it
-- IN OUT arguments: initialized by caller, already have a value, you can modify them and the caller will see it
-- OUT arguments: they're reinitialized by the procedure, the caller will see the final value.
CREATE PROCEDURE f (p IN NUMBER, x IN OUT NUMBER, y OUT NUMBER)
IS
BEGIN
x:=x * p;
y:=4 * p;
END;
/
SET SERVEROUTPUT ON
declare
foo number := 30;
bar number := 0;
begin
f(5,foo,bar);
dbms_output.put_line(foo || ' ' || bar);
end;
/
-- Procedure output can be collected from variables x and y (ans1:= x and ans2:=y) will be: 150 and 20 respectively.
-- Answer borrowed from: https://stackoverflow.com/a/9484228/1661078
The content of the table/tables in PostgreSQL database can be deleted in several ways.
Deleting table content using sql:
Deleting content of one table:
TRUNCATE table_name;
DELETE FROM table_name;
Deleting content of all named tables:
TRUNCATE table_a, table_b, …, table_z;
Deleting content of named tables and of tables that reference them (I will explain it in more detail later in this answer):
TRUNCATE table_a, table_b CASCADE;
Deleting table content using pgAdmin:
Deleting content of one table:
Right click on the table -> Truncate
Deleting content of a table and of tables that reference it:
Right click on the table -> Truncate Cascaded
Difference between delete and truncate:
From the documentation:
DELETE deletes rows that satisfy the WHERE clause from the specified table. If the WHERE clause is absent, the effect is to delete all rows in the table. http://www.postgresql.org/docs/9.3/static/sql-delete.html
TRUNCATE is a PostgreSQL extension that provides a faster mechanism to remove all rows from a table. TRUNCATE quickly removes all rows from a set of tables. It has the same effect as an unqualified DELETE on each table, but since it does not actually scan the tables it is faster. Furthermore, it reclaims disk space immediately, rather than requiring a subsequent VACUUM operation. This is most useful on large tables. http://www.postgresql.org/docs/9.1/static/sql-truncate.html
Working with table that is referenced from other table:
When you have a database with more than one table, the tables probably have relationships. As an example, there are three tables:
create table customers (
customer_id int not null,
name varchar(20),
surname varchar(30),
constraint pk_customer primary key (customer_id)
);
create table orders (
order_id int not null,
number int not null,
customer_id int not null,
constraint pk_order primary key (order_id),
constraint fk_customer foreign key (customer_id) references customers(customer_id)
);
create table loyalty_cards (
card_id int not null,
card_number varchar(10) not null,
customer_id int not null,
constraint pk_card primary key (card_id),
constraint fk_customer foreign key (customer_id) references customers(customer_id)
);
And some prepared data for these tables:
insert into customers values (1, 'John', 'Smith');
insert into orders values
(10, 1000, 1),
(11, 1009, 1),
(12, 1010, 1);
insert into loyalty_cards values (100, 'A123456789', 1);
Table orders references table customers, and table loyalty_cards references table customers. When you try to TRUNCATE / DELETE FROM a table that is referenced by other tables (the other tables have a foreign key constraint to the named table), you get an error. To delete content from all three tables you have to name all of them (the order is not important):
TRUNCATE customers, loyalty_cards, orders;
or just the table that is referenced, with the CASCADE keyword (you can name more tables than just one):
TRUNCATE customers CASCADE;
The same applies for pgAdmin. Right click on customers table and choose Truncate Cascaded.
SELECT ROUND(SUM(amount)::numeric, 2) AS total_amount
FROM transactions
Gives: 200234.08
Using Hibernate:
@Transactional(readOnly=true)
public void accessUser() {
EntityManager em = repo.getEntityManager();
org.hibernate.Session session = em.unwrap(org.hibernate.Session.class);
org.hibernate.SQLQuery q = (org.hibernate.SQLQuery) session.createSQLQuery("SELECT u.username, u.name, u.email, 'blabla' as passe, login_type as loginType FROM users u").addScalar("username", StringType.INSTANCE).addScalar("name", StringType.INSTANCE).addScalar("email", StringType.INSTANCE).addScalar("passe", StringType.INSTANCE).addScalar("loginType", IntegerType.INSTANCE)
.setResultTransformer(Transformers.aliasToBean(User2DTO.class));
List<User2DTO> userList = q.list();
}
Use LIKE ANY(ARRAY['AAA%', 'BBB%', 'CCC%'])
as per this cool trick @maniek showed earlier today.
For Macs with Homebrew. I had this problem when fetching the db from Heroku. I've fixed it just running:
brew upgrade postgresql
You can make a view of the table with the specific records and then dump it to an SQL file:
CREATE VIEW foo AS
SELECT id,name,city FROM nyummy.cimory WHERE city = 'tokyo'
pip3 install django-psycopg2-extension
I know I am late, and there are lots of answers up here that also solve the problem. But today I faced this problem too and none of them helped me. Then I found the above magical command, which solved my problem :-P. So I am posting it, as it might be the case for you too.
Happy coding.
Then type:
$ sudo -u postgres psql
Then:
\password postgres
Then to quit psql
:
\q
If that does not work, reconfigure authentication.
Edit /etc/postgresql/9.1/main/pg_hba.conf
(path will differ) and change:
local all all peer
to:
local all all md5
Then restart the server:
$ sudo service postgresql restart
The function below checks whether the column exists: if it does, it returns an appropriate message; otherwise, it adds the column to the table.
create or replace function addcol(schemaname varchar, tablename varchar, colname varchar, coltype varchar)
returns varchar
language 'plpgsql'
as
$$
declare
col_name varchar ;
begin
execute 'select column_name from information_schema.columns where table_schema = ' ||
quote_literal(schemaname)||' and table_name='|| quote_literal(tablename) || ' and column_name= '|| quote_literal(colname)
into col_name ;
raise info ' the val : % ', col_name;
if(col_name is null ) then
col_name := colname;
execute 'alter table ' ||schemaname|| '.'|| tablename || ' add column '|| colname || ' ' || coltype;
else
col_name := colname ||' Already exist';
end if;
return col_name;
end;
$$
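A call might look like this (hypothetical schema, table, and column names):
select addcol('public', 'my_table', 'my_new_column', 'integer');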
PostgreSQL 13 supports natively gen_random_uuid ():
PostgreSQL includes one function to generate a UUID:
gen_random_uuid () → uuid
This function returns a version 4 (random) UUID. This is the most commonly used type of UUID and is appropriate for most applications.
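For example, it can be called directly or used as a column default:
SELECT gen_random_uuid();
CREATE TABLE t (id uuid DEFAULT gen_random_uuid() PRIMARY KEY);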
I don't really know Mac or Homebrew, but I know PostgreSQL very well.
You want to figure out where the logs are from PostgreSQL trying to start and what the socket directory is for PostgreSQL. By default when you build PG, the socket directory is /tmp/. If you didn't change that when you built PG and then you started PG, you should be able to see a socket file in /tmp if you do: ls -al /tmp
The socket file starts with a ".", so you won't see it without the '-a' to ls.
If you don't see a socket there, and you don't see anything from ps awux | grep postgres, then PG is probably not running, or maybe it is and it's the OSX-installed one. What might be happening is that you might be getting a conflict on listening on port 5432 on localhost- use netstat -anp to see what, if anything, is listening on 5432. If a Mac OSX PG is already listening on that port then that might be the problem.
Hope that helps. I have heard that homebrew can make things a bit ugly and a lot of people I've talked to encourage using a VM instead.
I think you're looking for SELECT INTO
:
select test_table.name into name from test_table where id = x;
That will pull the name
from test_table
where id
is your function's argument and leave it in the name
variable. Don't leave out the table name prefix on test_table.name
or you'll get complaints about an ambiguous reference.
Escaping single quotes '
by doubling them up -> ''
is the standard way and works of course:
'user's log' -- incorrect syntax (unbalanced quote)
'user''s log'
In old versions or if you still run with standard_conforming_strings = off
or, generally, if you prepend your string with E
to declare Posix escape string syntax, you can also escape with the backslash \
:
E'user\'s log'
Backslash itself is escaped with another backslash. But that's generally not preferable.
If you have to deal with many single quotes or multiple layers of escaping, you can avoid quoting hell in PostgreSQL with dollar-quoted strings:
'escape '' with '''''
$$escape ' with ''$$
To further avoid confusion among dollar-quotes, add a unique token to each pair:
$token$escape ' with ''$token$
Which can be nested any number of levels:
$token2$Inner string: $token1$escape ' with ''$token1$ is nested$token2$
Pay attention if the $
character should have special meaning in your client software. You may have to escape it in addition. This is not the case with standard PostgreSQL clients like psql or pgAdmin.
That is all very useful for writing plpgsql functions or ad-hoc SQL commands. It cannot alleviate the need to use prepared statements or some other method to safeguard against SQL injection in your application when user input is possible, though. @Craig's answer has more on that. More details:
When dealing with values inside the database, there are a couple of useful functions to quote strings properly:
quote_literal() or quote_nullable() - the latter outputs the string NULL for null input. (There is also quote_ident() to double-quote strings where needed to get valid SQL identifiers.)
format() with the format specifier %L is equivalent to quote_nullable(). Example: format('%L', string_var)
concat() or concat_ws()
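A quick illustration of the quoting functions (the input string is arbitrary):
SELECT quote_literal('user''s log');  -- => 'user''s log', safely quoted
SELECT quote_nullable(NULL);          -- => the string NULL
SELECT format('%L', 'user''s log');   -- => same result as quote_nullable()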
Did you check the string concatenation function? Something like:
update table_c set column_a = column_b || column_c
should work. More here
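One caveat worth knowing: || returns NULL if either operand is NULL. If that is possible in your data, concat() treats NULL as an empty string instead:
update table_c set column_a = concat(column_b, column_c)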
For optimal insertion performance, disable the index if that's an option for you. Other than that, better hardware (disk, memory) also helps.
SELECT tab.*,
row_number() OVER () as rnum
FROM tab;
Here's the relevant section in the docs.
P.S. This, in fact, fully matches the answer in the referenced question.
Just for future reference, if you already have a boolean column and you just want to add a default do:
ALTER TABLE users
ALTER COLUMN priv_user SET DEFAULT false;
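Note that the default only affects rows inserted afterwards; if existing rows should become false as well, backfill them explicitly:
UPDATE users SET priv_user = false WHERE priv_user IS NULL;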
The code is okay but you are in the wrong directory. You must run these commands inside your rails project-directory.
The normal way to get there from scratch is:
$ rails new PROJECT_NAME
$ cd PROJECT_NAME
$ rails generate model ad \
name:string \
description:text \
price:decimal \
seller_id:integer \
email:string img_url:string
You can use this:
static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
static final String DB_URL = "jdbc:mysql://localhost:3306/YOUR_DB_NAME";
static final String USER = "root";
static final String PASS = "YOUR_ROOT_PASSWORD";
Connection conn = DriverManager.getConnection(DB_URL,USER,PASS);
You have to give the right root password.
For Windows users: psql -U postgres
You should see then the command-line interface to PostgreSQL: postgres=#
I actually tried to implement connection pooling on the django end using:
https://github.com/gmcguire/django-db-pool
but I still received this error, despite lowering the number of connections available to below the standard development DB quota of 20 open connections.
There is an article here about how to move your postgresql database to the free/cheap tier of Amazon RDS. This would allow you to set max_connections
higher. This will also allow you to pool connections at the database level using PGBouncer.
https://www.lewagon.com/blog/how-to-migrate-heroku-postgres-database-to-amazon-rds
UPDATE:
Heroku responded to my open ticket and stated that my database was improperly load balanced in their network. They said that improvements to their system should prevent similar problems in the future. Nonetheless, support manually relocated my database and performance is noticeably improved.
WITH uniq AS (
SELECT DISTINCT posts.id as post_id
FROM posts
JOIN votes ON votes.post_id = posts.id
-- GROUP BY not needed anymore
-- GROUP BY posts.id
)
SELECT COUNT(*)
FROM uniq;
Another form that works with Postgres 9.1+ is combining a Common Table Expression with the USING statement for the join.
WITH prod AS (select m_product_id, upc from m_product where upc='7094')
DELETE FROM m_productprice B
USING prod C
WHERE B.m_product_id = C.m_product_id
AND B.m_pricelist_version_id = '1000020';
All you need to do is to change the PATH variable to include the bin
directory of your PostgreSQL installation.
An explanation on how to change environment variables is here:
http://support.microsoft.com/kb/310519
http://www.computerhope.com/issues/ch000549.htm
To verify that the path is set correctly, you can use:
echo %PATH%
on the commandline.
DO
$do$
BEGIN
IF EXISTS (SELECT FROM orders) THEN
DELETE FROM orders;
ELSE
INSERT INTO orders VALUES (1,2,3);
END IF;
END
$do$
There are no procedural elements in standard SQL. The IF
statement is part of the default procedural language PL/pgSQL. You need to create a function or execute an ad-hoc statement with the DO
command.
You need a semicolon (;
) at the end of each statement in plpgsql (except for the final END
).
You need END IF;
at the end of the IF
statement.
A sub-select must be surrounded by parentheses:
IF (SELECT count(*) FROM orders) > 0 ...
Or:
IF (SELECT count(*) > 0 FROM orders) ...
This is equivalent and much faster, though:
IF EXISTS (SELECT FROM orders) ...
The additional SELECT
is not needed. This does the same, faster:
DO
$do$
BEGIN
DELETE FROM orders;
IF NOT FOUND THEN
INSERT INTO orders VALUES (1,2,3);
END IF;
END
$do$
Though unlikely, concurrent transactions writing to the same table may interfere. To be absolutely sure, write-lock the table in the same transaction before proceeding as demonstrated.
In pgAdmin 4, right-click on the table and select Backup.
After that, in the Backup dialog, there is a Dump options tab; in the Queries section you can select Use Insert Commands, which includes all the insert queries in the backup as well.
You can use this:
SELECT COUNT(*) FROM (SELECT DISTINCT column_name FROM table_name) AS temp;
This is much faster than:
COUNT(DISTINCT column_name)
If INSERTS
are rare, I would avoid doing a NOT EXISTS (...)
since it emits a SELECT
on all updates. Instead, take a look at wildpeaks' answer: https://dba.stackexchange.com/questions/5815/how-can-i-insert-if-key-not-exist-with-postgresql
CREATE OR REPLACE FUNCTION upsert_tableName(arg1 type, arg2 type) RETURNS VOID AS $$
DECLARE
BEGIN
UPDATE tableName SET col1 = value WHERE colX = arg1 and colY = arg2;
IF NOT FOUND THEN
INSERT INTO tableName values (value, arg1, arg2);
END IF;
END;
$$ LANGUAGE 'plpgsql';
This way Postgres will initially try to do a UPDATE
. If no rows was affected, it will fall back to emitting an INSERT
.
Here is a slightly modified version. Changes are noted as code commentary.
BEGIN TRANSACTION
declare @cnt int
declare @test nvarchar(128)
-- variable to hold table name
declare @tableName nvarchar(255)
declare @cmd nvarchar(500)
-- local means the cursor name is private to this code
-- fast_forward enables some speed optimizations
declare Tests cursor local fast_forward for
SELECT COLUMN_NAME, TABLE_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE 'pct%'
AND TABLE_NAME LIKE 'TestData%'
open Tests
-- Instead of fetching twice, I rather set up no-exit loop
while 1 = 1
BEGIN
-- And then fetch
fetch next from Tests into @test, @tableName
-- And then, if no row is fetched, exit the loop
if @@fetch_status <> 0
begin
break
end
-- Quotename is needed if you ever use special characters
-- in table/column names. Spaces, reserved words etc.
-- Other changes add apostrophes at right places.
set @cmd = N'exec sp_rename '''
+ quotename(@tableName)
+ '.'
+ quotename(@test)
+ N''','''
+ RIGHT(@test,LEN(@test)-3)
+ '_Pct'''
+ N', ''column'''
print @cmd
EXEC sp_executeSQL @cmd
END
close Tests
deallocate Tests
ROLLBACK TRANSACTION
--COMMIT TRANSACTION
Try the following:
<script>
//Functions to open database and to create, insert data into tables
getSelectedRow = function(val)
{
db.transaction(function(transaction) {
transaction.executeSql('SELECT * FROM Employ where number = ?;',[parseInt(val)], selectedRowValues, errorHandler);
});
};
selectedRowValues = function(transaction,results)
{
for(var i = 0; i < results.rows.length; i++)
{
var row = results.rows.item(i);
alert(row['number']);
alert(row['name']);
}
};
</script>
You don't have access to JavaScript variables in SQL; you must pass the values to the database.
Use coalesce:
COALESCE(value [, ...])
The COALESCE function returns the first of its arguments that is not null. Null is returned only if all arguments are null. It is often used to substitute a default value for null values when data is retrieved for display.
Edit
Here's an example of COALESCE
with your query:
SELECT AVG( price )
FROM(
SELECT *, cume_dist() OVER ( ORDER BY price DESC ) FROM web_price_scan
WHERE listing_Type = 'AARM'
AND u_kbalikepartnumbers_id = 1000307
AND ( EXTRACT( DAY FROM ( NOW() - dateEnded ) ) ) * 24 < 48
AND COALESCE( price, 0 ) > ( SELECT AVG( COALESCE( price, 0 ) )* 0.50
FROM ( SELECT *, cume_dist() OVER ( ORDER BY price DESC )
FROM web_price_scan
WHERE listing_Type='AARM'
AND u_kbalikepartnumbers_id = 1000307
AND ( EXTRACT( DAY FROM ( NOW() - dateEnded ) ) ) * 24 < 48
) g
WHERE cume_dist < 0.50
)
AND COALESCE( price, 0 ) < ( SELECT AVG( COALESCE( price, 0 ) ) *2
FROM( SELECT *, cume_dist() OVER ( ORDER BY price desc )
FROM web_price_scan
WHERE listing_Type='AARM'
AND u_kbalikepartnumbers_id = 1000307
AND ( EXTRACT( DAY FROM ( NOW() - dateEnded ) ) ) * 24 < 48
) d
WHERE cume_dist < 0.50)
)s
HAVING COUNT(*) > 5
IMHO COALESCE should not be used with AVG because it modifies the value. NULL means unknown and nothing else. It's not like using it in SUM. In this example, if we replace AVG with SUM, the result is not distorted: adding 0 to a sum doesn't hurt anyone, but calculating an average with 0 for the unknown values does not give you the real average.
In that case, I would add price IS NOT NULL
in WHERE
clause to avoid these unknown values.
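Filtering out the unknown values, instead of coercing them to 0, might look like the sketch below; note that AVG() already ignores NULLs by itself, so the explicit predicate mainly documents the intent:
SELECT AVG(price)
FROM web_price_scan
WHERE listing_Type = 'AARM'
  AND price IS NOT NULL;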
This is a convention defined in RFC 2045 - Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies.
Private [subtype] values (starting with "X-") may be defined bilaterally between two cooperating agents without outside registration or standardization. Such values cannot be registered or standardized.
New standard values should be registered with IANA as described in RFC 2048.
A similar restriction applies to the top-level type. From the same source,
If another top-level type is to be used for any reason, it must be given a name starting with "X-" to indicate its non-standard status and to avoid a potential conflict with a future official name.
(Note that per RFC 2045, "[m]atching of media type and subtype is ALWAYS case-insensitive", so there's no difference between the interpretation of 'X-' and 'x-'.)
So it's fair to guess that "application/x-foo" was used before the IANA defined "application/foo". And it still might be used by folks who aren't aware of the IANA token assignment.
As Chris Hanson said MIME types are controlled by the IANA. This is detailed in RFC 2048 - Multipurpose Internet Mail Extensions (MIME) Part Four: Registration Procedures. According to RFC 3778, which is cited by the IANA as the definition for "application/pdf",
The application/pdf media type was first registered in 1993 by Paul Lindner for use by the gopher protocol; the registration was subsequently updated in 1994 by Steve Zilles.
The type "application/pdf" has been around for well over a decade. So it seems to me that wherever "application/x-pdf" has been used in new apps, the decision may not have been deliberate.
Assuming your example document is in the string variable doc
> XDocument.Parse(doc).Root.Attribute("SuperNumber")
1
This feature is built into the Atom editor (see: the symbols-view
package), but you need to generate a ctags symbols file for your project (GH-9, GH-20).
To do that, install ctags
command (e.g. brew install ctags
on macOS), then:
Append, link or copy ctags-config
to your ~/.ctags
, example on macOS:
ln -vs "$(find /Applications/Atom.app -name ctags-config -print -quit)" ~/.ctags
Go to your project folder and run:
cd your/project/directory
ctags -R .
Restart Atom editor.
Alternatively you can use symbol-gen
package to generate ctags symbols file for your project based on the options found in .ctags
file. You can install it from Atom Package Manager by: apm install symbol-gen
. Then hit CMD-Alt-G to generate tags file for your project.
After following above, you can use Go To Declaration option from the context menu.
On macOS you can also use the following keyboard shortcuts:
You can definitely build your code from Vim; that's what the :make command does.
However, you need to go through the basics first : type vimtutor
in your terminal and follow the instructions to the end.
After you have completed it a few times, open an existing (non-important) text file and try out all the things you learned from vimtutor
: entering/leaving insert mode, undoing changes, quitting/saving, yanking/putting, moving and so on.
For a while you won't be productive at all with Vim and will probably be tempted to go back to your previous IDE/editor. Do that, but keep up with Vim a little bit every day. You'll probably be stopped by very weird and unexpected things but it will happen less and less.
In a few months you'll find yourself hitting o
, v
and i
all the time in every textfield everywhere.
Have fun!
I was looking for something and I found this post. I am posting this code, which manages many-to-many relationships, in case someone needs it.
var UserInRole = db.UsersInRoles.Include(u => u.UserProfile).Include(u => u.Roles)
    .Select(m => new
    {
        UserName = m.UserProfile.UserName,
        RoleName = m.Roles.RoleName
    });
UPDATE user_account student
SET (student.student_education_facility_id) = (
SELECT teacher.education_facility_id
FROM user_account teacher
WHERE teacher.user_account_id = student.teacher_id AND teacher.user_type = 'ROLE_TEACHER'
)
WHERE student.user_type = 'ROLE_STUDENT';
Instead of an if-else condition, use if for both conditions. It will work that way, but I am not sure why.
You can also use the following syntax for the strongly typed version:
<% using (Html.BeginForm<SomeController>(x=> x.SomeAction(),
FormMethod.Post,
new { enctype = "multipart/form-data" }))
{ %>
In the Swift 4.2.1 project in Xcode 10.1 you can easily add Objective-C file. Follow the steps below to bridge Objective-C file to Swift project.
Step_01: Create new Xcode project using Swift language:
File
> New
> Project
> objc
.
Step_02: In Swift project add new Objective-C file:
File
> New
> File...
> macOS
> Objective-C File
.
Step_03: The first time you add a new Objective-C file to a Swift project, Xcode asks you:
Would you like to configure an Objective-C bridging header
?
Step_04: Select the option:
Create Bridging Header
.
Step_05: A corresponding file will be generated with a name:
Objc-Bridging-Header.h
.
Step_06: Now you need to set up the bridging file path in the bridging header. In the Project Navigator, click on the project with the name objc
and then choose:
Build Settings
> Objective-C Bridging Header
> Objc-Bridging-Header.h
.
Step_07: Drag-and-drop your Objc-Bridging-Header.h
into that box to generate a file path.
Step_08: Open your Objc-Bridging-Header.h
file and import the Objective-C file which you want to use in your Swift file.
#import "SpecialObjcFile.m"
Here's a content of SpecialObjcFile.m
:
#import <Foundation/Foundation.h>
@interface Person: NSObject {
@public
bool busy;
}
@property
bool busy;
@end
Step_09: Now in your Swift file, you can use an Objective-C class:
override func viewDidLoad() {
super.viewDidLoad()
let myObjcContent = Person()
print(myObjcContent.busy)
}
Another way, using mktime and not date('t'):
$dateStart= date("Y-m-d", mktime(0, 0, 0, 10, 1, 2016)); //2016-10-01
$dateEnd = date("Y-m-d", mktime(0, 0, 0, 11, 0, 2016)); //This will return the last day of october, 2016-10-31 :)
This way it correctly calculates whether the month has 31, 30, or 29 days.
You can save a string raw as is by using FileReader.
Save the string in a blob and call readAsArrayBuffer(). Then the onload-event results an arraybuffer, which can converted in a Uint8Array. Unfortunately this call is asynchronous.
This little function will help you:
function stringToBytes(str)
{
let reader = new FileReader();
let done = () => {};
reader.onload = event =>
{
done(new Uint8Array(event.target.result), str);
};
reader.readAsArrayBuffer(new Blob([str], { type: "application/octet-stream" }));
return { done: callback => { done = callback; } };
}
Call it like this:
stringToBytes("\u{1f4a9}").done(bytes =>
{
console.log(bytes);
});
output: [240, 159, 146, 169]
explanation:
JavaScript uses UTF-16 and surrogate pairs to store Unicode characters in memory. To save Unicode characters in raw binary byte streams, an encoding is necessary. Usually, and in most cases, UTF-8 is used for this. If you don't use an encoding you can't save Unicode characters, just ASCII up to 0x7F.
FileReader.readAsArrayBuffer() uses UTF-8.
No need for the option, just make the preferred width of the last column the maximum and it will take all the extra space.
table.getColumnModel().getColumn(0).setPreferredWidth(27);
table.getColumnModel().getColumn(1).setPreferredWidth(120);
table.getColumnModel().getColumn(2).setPreferredWidth(100);
table.getColumnModel().getColumn(3).setPreferredWidth(90);
table.getColumnModel().getColumn(4).setPreferredWidth(90);
table.getColumnModel().getColumn(6).setPreferredWidth(120);
table.getColumnModel().getColumn(7).setPreferredWidth(100);
table.getColumnModel().getColumn(8).setPreferredWidth(95);
table.getColumnModel().getColumn(9).setPreferredWidth(40);
table.getColumnModel().getColumn(10).setPreferredWidth(Integer.MAX_VALUE);
If I understand correctly, you need the following code (passing a lambda expression as a parameter). The method:
public static void Method(Expression<Func<int, bool>> predicate) {
    int[] number = {1,2,3,4,5,6,7,8,9,10};
    var newList = (from x in number
                       .Where(predicate.Compile()) //here compile your clause
                   select x).ToList(); //materialize the new list
}
Calling method
Method(v => v.Equals(1));
You can do the same in your own class; see this example:
public class Class
{
    public string Name { get; set; }
    public Class(string name) { Name = name; }

    public static List<Class> GetList(Expression<Func<Class, bool>> predicate)
    {
        List<Class> c = new List<Class>();
        c.Add(new Class("name1"));
        c.Add(new Class("name2"));
        var f = from g in c.Where(predicate.Compile())
                select g;
        return f.ToList();
    }
}
Calling method
Class.GetList(c=>c.Name=="yourname");
I hope this is useful
BEWARE: the order of the update columns is critical.
GOOD: what I want - saves the existing value of Status to PrevStatus:
UPDATE Collections SET PrevStatus=Status, Status=44 WHERE ID=1487496;
BAD: Status & PrevStatus both end up as 44:
UPDATE Collections SET Status=44, PrevStatus=Status WHERE ID=1487496;
You can use navigator.mimeTypes.
if (navigator.mimeTypes["application/x-shockwave-flash"] == undefined)
    $("#someDiv").show();
For .NET Core projects and Visual Studio 2017 I was able to restore all packages to relative path by providing this configuration:
<configuration>
<config>
<add key="globalPackagesFolder" value="lib" />
</config>
...
</configuration>
Based on my experience, the lib folder was created on the same level where NuGet.config was found, no matter where the sln file was. I tested it, and the behavior is the same for command-line dotnet restore and a Visual Studio 2017 rebuild.
Try
App.Current.Shutdown();
For me
Application.Current.Shutdown();
didn't work.
There are 3 kinds of object files.
1. Relocatable object files
Contain machine code in a form that can be combined with other relocatable object files at link time, in order to form an executable object file.
If you have an a.c
source file, to create its object file with GCC you should run:
gcc a.c -c
The full process would be: preprocessor (cpp) would run over a.c. Its output (still source) will feed into the compiler (cc1). Its output (assembly) will feed into the assembler (as), which will produce the relocatable object file
. That file contains object code and linking (and debugging if -g
was used) metadata, and is not directly executable.
2. Shared object files
A special type of relocatable object file that can be loaded dynamically, either at load time or at run time. Shared libraries are an example of this kind of object.
3. Executable object files
Contain machine code that can be directly loaded into memory (by the loader, e.g. execve) and subsequently executed.
The result of running the linker over multiple relocatable object files
is an executable object file
. The linker merges all the input object files from the command line, from left-to-right, by merging all the same-type input sections (e.g. .data
) to the same-type output section. It uses symbol resolution
and relocation
.
When linking against a static library
the functions that are referenced in the input objects are copied to the final executable.
With dynamic libraries a symbol table is created instead, which enables dynamic linking with the library's functions/globals. Thus, the result is a partially executable object file, as it depends on the library. (If the library doesn't exist, the file can no longer execute.)
The linking process can be done as follows:
ld a.o -o myexecutable
The command gcc a.c -o myexecutable
will invoke all the commands mentioned at point 1 and at point 3 (cpp -> cc1 -> as -> ld¹).
¹ actually collect2, which is a wrapper over ld.
From the spec, §15.9.1.1:
A Date object contains a Number indicating a particular instant in time to within a millisecond. Such a Number is called a time value. A time value may also be NaN, indicating that the Date object does not represent a specific instant of time.
Time is measured in ECMAScript in milliseconds since 01 January, 1970 UTC. In time values leap seconds are ignored. It is assumed that there are exactly 86,400,000 milliseconds per day. ECMAScript Number values can represent all integers from –9,007,199,254,740,992 to 9,007,199,254,740,992; this range suffices to measure times to millisecond precision for any instant that is within approximately 285,616 years, either forward or backward, from 01 January, 1970 UTC.
The actual range of times supported by ECMAScript Date objects is slightly smaller: exactly –100,000,000 days to 100,000,000 days measured relative to midnight at the beginning of 01 January, 1970 UTC. This gives a range of 8,640,000,000,000,000 milliseconds to either side of 01 January, 1970 UTC.
The exact moment of midnight at the beginning of 01 January, 1970 UTC is represented by the value +0.
The third paragraph being the most relevant. Based on that paragraph, we can get the precise earliest date per spec from new Date(-8640000000000000)
, which is Tuesday, April 20th, 271,821 BCE (BCE = Before Common Era, i.e., the year -271,821).
If you want to save distinct values into a collection you could try HashSet Class. It will automatically remove the duplicate values and save your coding time. :)
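For example, a minimal C# sketch (the element values are just illustrations):
using System;
using System.Collections.Generic;

var distinctNames = new HashSet<string>();
distinctNames.Add("Alice");
distinctNames.Add("Alice"); // duplicate is ignored; Add returns false
distinctNames.Add("Bob");
Console.WriteLine(distinctNames.Count); // prints 2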
I also like to build locators from top to bottom, like:
//div[contains(@class,'btn-group')][./button[contains(.,'Arcade Reader')]]/button[@name='settings']
It's pretty simple: we just search for the btn-group
that has button[contains(.,'Arcade Reader')]
and get its button[@name='settings'].
That's just another option for building xPath locators.
The benefit of finding the wrapper element is that you can return it from a method (example in Java) and build Selenium constructions like:
getGroupByName("Arcade Reader").find("button[name='settings']");
getGroupByName("Arcade Reader").find("button[name='delete']");
or even simplify more
getGroupButton("Arcade Reader", "delete").click();
ListaServizi.Sort();
Will do that for you. It's straightforward enough with a list of strings. You need to be a little cleverer if sorting objects.
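For instance, with a list of objects you can pass a comparison delegate to Sort. A minimal sketch, assuming a hypothetical servizi list whose elements expose a Name property:
// hypothetical: servizi is a List<Servizio>, each element exposing a Name property
servizi.Sort((a, b) => string.Compare(a.Name, b.Name, StringComparison.Ordinal));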
Create the ArrayList like ArrayList<String[]> action.
In JDK 1.5 or higher, use an ArrayList<String[]> reference.
In JDK 1.4 or lower, use a raw ArrayList reference.
Specify the access specifiers, then assign the reference:
action = new ArrayList<String[]>();
In the JVM, the new keyword allocates memory for the object at runtime.
You should not assign the value where it is declared, because you don't know the size in advance.
Finally, you can use the add() method of ArrayList, like
action.add(new String[howManyYouNeed]);
which allocates the specific memory area on the heap.
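Putting it all together, a minimal runnable sketch:
import java.util.ArrayList;

public class Demo {
    public static void main(String[] args) {
        // declare with the generic type (JDK 1.5+)
        ArrayList<String[]> action = new ArrayList<String[]>();
        action.add(new String[3]); // each element is a String array on the heap
        action.get(0)[0] = "first value of the first array";
        System.out.println(action.get(0)[0]);
    }
}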
For better performance you should use:
var numItems = $('div.item').length;
Since it will only look for the div
elements in DOM
and will be quick.
Suggestion: using size() instead of the length property means one extra step in processing, since size() uses the length property in its function definition and then returns the result.
Besides str.join, which is the most natural way, a possibility is to use io.StringIO and abuse writelines to write all elements in one go:
import io
a = ['a','b','c','d']
out = io.StringIO()
out.writelines(a)
print(out.getvalue())
prints:
abcd
When using this approach with a generator function or an iterable which isn't a tuple
or a list
, it saves the temporary list creation that join
does to allocate the right size in one go (and a list of 1-character strings is very expensive memory-wise).
If you're low in memory and you have a lazily-evaluated object as input, this approach is the best solution.
Sorry to burst your bubble but Spotify desktop client is just a Webkit-based browser. Of course it exposes specific additional functionality, but it's only able to run JS and render HTML/CSS because it has a JS engine as well as a Chromium rendering engine. This does not help you with coding a client-side web-app and deploying to multiple platforms.
What you're looking for is similar to Sencha Touch - a framework that allows for HTML5 apps to be natively deployed to iOS, Android and Blackberry devices. It basically acts as an intermediary between certain API calls and device-specific functionality available.
I have no experience with Appcelerator, but it appears to be doing exactly that, and it gets very favourable reviews online. You should give it a go (unless you want to go back to 1999 and roll with MS HTA ;)
You may use the following function to send a status change:
function header_status($statusCode) {
static $status_codes = null;
if ($status_codes === null) {
$status_codes = array (
100 => 'Continue',
101 => 'Switching Protocols',
102 => 'Processing',
200 => 'OK',
201 => 'Created',
202 => 'Accepted',
203 => 'Non-Authoritative Information',
204 => 'No Content',
205 => 'Reset Content',
206 => 'Partial Content',
207 => 'Multi-Status',
300 => 'Multiple Choices',
301 => 'Moved Permanently',
302 => 'Found',
303 => 'See Other',
304 => 'Not Modified',
305 => 'Use Proxy',
307 => 'Temporary Redirect',
400 => 'Bad Request',
401 => 'Unauthorized',
402 => 'Payment Required',
403 => 'Forbidden',
404 => 'Not Found',
405 => 'Method Not Allowed',
406 => 'Not Acceptable',
407 => 'Proxy Authentication Required',
408 => 'Request Timeout',
409 => 'Conflict',
410 => 'Gone',
411 => 'Length Required',
412 => 'Precondition Failed',
413 => 'Request Entity Too Large',
414 => 'Request-URI Too Long',
415 => 'Unsupported Media Type',
416 => 'Requested Range Not Satisfiable',
417 => 'Expectation Failed',
422 => 'Unprocessable Entity',
423 => 'Locked',
424 => 'Failed Dependency',
426 => 'Upgrade Required',
500 => 'Internal Server Error',
501 => 'Not Implemented',
502 => 'Bad Gateway',
503 => 'Service Unavailable',
504 => 'Gateway Timeout',
505 => 'HTTP Version Not Supported',
506 => 'Variant Also Negotiates',
507 => 'Insufficient Storage',
509 => 'Bandwidth Limit Exceeded',
510 => 'Not Extended'
);
}
if (isset($status_codes[$statusCode])) {
$status_string = $statusCode . ' ' . $status_codes[$statusCode];
header($_SERVER['SERVER_PROTOCOL'] . ' ' . $status_string, true, $statusCode);
}
}
You may use it as such:
<?php
header_status(500);
if (that_happened) {
    die("that happened");
}
if (something_else_happened) {
    die("something else happened");
}
update_database();
header_status(200);
In general this error message means that you have tried to use indexing on a function. You can reproduce this error message with, for example
mean[1]
## Error in mean[1] : object of type 'closure' is not subsettable
mean[[1]]
## Error in mean[[1]] : object of type 'closure' is not subsettable
mean$a
## Error in mean$a : object of type 'closure' is not subsettable
The closure mentioned in the error message is (loosely) the function and the environment that stores the variables when the function is called.
In this specific case, as Joshua mentioned, you are trying to access the url
function as a variable. If you define a variable named url
, then the error goes away.
As a matter of good practise, you should usually avoid naming variables after base-R functions. (Calling variables data
is a common source of this error.)
There are several related errors for trying to subset operators or keywords.
`+`[1]
## Error in `+`[1] : object of type 'builtin' is not subsettable
`if`[1]
## Error in `if`[1] : object of type 'special' is not subsettable
If you're running into this problem in shiny
, the most likely cause is that you're trying to work with a reactive
expression without calling it as a function using parentheses.
library(shiny)
reactive_df <- reactive({
data.frame(col1 = c(1,2,3),
col2 = c(4,5,6))
})
While we often work with reactive expressions in shiny as if they were data frames, they are actually functions that return data frames (or other objects).
isolate({
print(reactive_df())
print(reactive_df()$col1)
})
col1 col2
1 1 4
2 2 5
3 3 6
[1] 1 2 3
But if we try to subset it without parentheses, then we're actually trying to index a function, and we get an error:
isolate(
reactive_df$col1
)
Error in reactive_df$col1 : object of type 'closure' is not subsettable
To log MySQL in WAMP, you will need to edit the my.ini (e.g. under wamp\bin\mysql\mysql5.6.17\my.ini)
and add to [mysqld]
:
general_log = 1
general_log_file="c:\\tmp\\mysql.log"
What about one of these?
object.methods.sort
Class.methods.sort
Go to SQL Server Configuration Manager and start the SQL Server Browser service. The installation can't discover the newly created instance without it.
Excellent answer from above turned into a simple utility method.
private Uri buildURI(String url, Map<String, String> params) {
// build url with parameters.
Uri.Builder builder = Uri.parse(url).buildUpon();
for (Map.Entry<String, String> entry : params.entrySet()) {
builder.appendQueryParameter(entry.getKey(), entry.getValue());
}
return builder.build();
}
If you use an operating system that uses copy-on-write fork()
semantics (like any common unix), then as long as you never alter your data structure it will be available to all child processes without taking up additional memory. You will not have to do anything special (except make absolutely sure you don't alter the object).
The most efficient thing you can do for your problem would be to pack your array into an efficient array structure (using numpy
or array
), place that in shared memory, wrap it with multiprocessing.Array
, and pass that to your functions. This answer shows how to do that.
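A minimal sketch of that shared-array approach, assuming a fork-based start method (Linux/Unix); the names and sizes here are just examples:
import multiprocessing
import numpy as np

if __name__ == '__main__':
    # shared buffer of C doubles; lock=False because the workers only read
    shared = multiprocessing.Array('d', 1_000_000, lock=False)
    arr = np.frombuffer(shared, dtype=np.float64)  # numpy view, no copy
    arr[:] = np.random.rand(len(arr))

    def worker(start, end):
        # the child process inherits arr via fork and reads the same memory
        print(np.sum(arr[start:end]))

    p = multiprocessing.Process(target=worker, args=(0, 500_000))
    p.start()
    p.join()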
If you want a writeable shared object, then you will need to wrap it with some kind of synchronization or locking. multiprocessing
provides two methods of doing this: one using shared memory (suitable for simple values, arrays, or ctypes) or a Manager
proxy, where one process holds the memory and a manager arbitrates access to it from other processes (even over a network).
The Manager
approach can be used with arbitrary Python objects, but will be slower than the equivalent using shared memory because the objects need to be serialized/deserialized and sent between processes.
There are a wealth of parallel processing libraries and approaches available in Python. multiprocessing
is an excellent and well rounded library, but if you have special needs perhaps one of the other approaches may be better.
You can do that in this way:
ORDER BY `products`.`product_category_id` DESC ,`naam` ASC
Have a look at ORDER BY
Optimization
In addition to NateW's answer, if you have hyphens in your css class name, you should wrap that class within (single) quotes:
<th
class="initial "
v-on:click="myFilter"
v-bind:class="{ 'is-active' : isActive}"
>
<span class="wkday">M</span>
</th>
See this topic for more on the subject.
Here is my solution using a temporary hidden form.
//Create an hidden form
var form = $('<form>', {'method': 'POST', 'action': this.href}).hide();
//Add params
var params = { ...your params... };
$.each(params, function (k, v) {
form.append($('<input>', {'type': 'hidden', 'name': k, 'value': v}));
});
//Make it part of the document and submit
$('body').append(form);
form.submit();
//Clean up
form.remove();
Note that I use jQuery heavily here, but you can do the same with native JS.
Nice question/answers.
Just want to point out that C pointers and arrays are not the same, although in this case the difference is not essential.
Consider the following declarations:
int a[10];
int* p = a;
In a.out
, the symbol a
is at an address that's the beginning of the array, and symbol p
is at an address where a pointer is stored, and the value of the pointer at that memory location is the beginning of the array.
I deal with this question at work often:
FTP feeds of inventory and pricing - Item descriptions and other text were in nvarchar when varchar worked fine. Converting these to varchar reduced file size almost in half and really helped with uploads.
The above scenario worked fine until someone put a special character in the item description (maybe trademark, can't remember)
I still do not use nvarchar every time over varchar. If there is any doubt or potential for special characters, I use nvarchar. I find I use varchar mostly when I am in 100% control of what is populating the field.
document.location
isn't a string.
You're probably wanting to use document.location.href
or document.location.pathname
instead.
This is the solution for MS Access.
The example table is called tab
, with fields id
and val
.
SELECT (SELECT last(val)
FROM tab AS temp
WHERE tab.id >= temp.id AND temp.val IS NOT NULL) AS val2, *
FROM tab;
I think others have clarified this adequately for other approaches (sed
, AWK
, etc.). However, my bash
-specific answers (tested on macOS High Sierra and CentOS 6/7) follow.
1) If OP wanted to use a search-and-replace method similar to what they originally proposed, then I would suggest using perl
for this, as follows. Notes: backslashes before parentheses for regex shouldn't be necessary, and this code line reflects how $1
is better to use than \1
with perl
substitution operator (e.g. per Perl 5 documentation).
perl -pe 's/(.*)/\t$1/' $filename > $sedTmpFile && mv $sedTmpFile $filename
2) However, as pointed out by ghostdog74, since the desired operation is actually to simply add a tab at the start of each line before changing the tmp file to the input/target file ($filename
), I would recommend perl
again but with the following modification(s):
perl -pe 's/^/\t/' $filename > $sedTmpFile && mv $sedTmpFile $filename
## OR
perl -pe $'s/^/\t/' $filename > $sedTmpFile && mv $sedTmpFile $filename
3) Of course, the tmp file is superfluous, so it's better to just do everything 'in place' (adding -i
flag) and simplify things to a more elegant one-liner with
perl -i -pe $'s/^/\t/' $filename
Update: In Angular 7, they are the same as in 6.
In Angular 6
the complete answer can be found in the live example
/** POST: add a new hero to the database */
addHero (hero: Hero): Observable<Hero> {
return this.http.post<Hero>(this.heroesUrl, hero, httpOptions)
.pipe(
catchError(this.handleError('addHero', hero))
);
}
/** GET heroes from the server */
getHeroes (): Observable<Hero[]> {
return this.http.get<Hero[]>(this.heroesUrl)
.pipe(
catchError(this.handleError('getHeroes', []))
);
}
This is because of pipeable/lettable operators, which allow Angular to tree-shake and remove unused imports, optimizing the app.
Some RxJS functions were renamed:
do -> tap
catch -> catchError
switch -> switchAll
finally -> finalize
more in MIGRATION
and Import paths
For JavaScript developers, the general rule is as follows:
rxjs: Creation methods, types, schedulers and utilities
import { Observable, Subject, asapScheduler, pipe, of, from, interval, merge, fromEvent } from 'rxjs';
rxjs/operators: All pipeable operators:
import { map, filter, scan } from 'rxjs/operators';
rxjs/webSocket: The web socket subject implementation
import { webSocket } from 'rxjs/webSocket';
rxjs/ajax: The Rx ajax implementation
import { ajax } from 'rxjs/ajax';
rxjs/testing: The testing utilities
import { TestScheduler } from 'rxjs/testing';
and for backward compatibility you can use rxjs-compat
You can also test them as a couple.
if (i,ii)==(5,10):
print "i is 5 and ii is 10"
int num = 10;
string str = Convert.ToString(num);
You can use typeof(T)
.
private static string BuildClause<T>(IList<T> clause)
{
Type itemType = typeof(T);
if(itemType == typeof(int) || itemType == typeof(decimal))
...
}
I am not sure, but maybe this logic would work.
var d = 10;
var prevDate = "";
var x = 0;
var oldVal = "";
var func = function (d) {
if (x == 0 && d != prevDate && prevDate == "") {
oldVal = d;
prevDate = d;
}
else if (x == 1 && prevDate != d) {
oldVal = prevDate;
prevDate = d;
}
console.log(oldVal);
x = 1;
};
/*
============================================
Try:
func(2);
func(3);
func(4);
*/
Check whether scp is installed on the machine you want to copy from:
check using which scp
If it's already installed, it will print a path like /usr/bin/scp
Else, install scp using:
yum -y install openssh-clients
Then run the copy command:
scp -r [email protected]:/var/www/html/database_backup/restore_fullbackup/backup_20140308-023002.sql /var/www/html/db_bkp/
Had the same problem ("... is not a method") with jQuery when working on autocomplete. It turned out the code was executed before autocomplete.js was loaded. So make sure ui.colorpicker.js is loaded before calling colorpicker.
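In other words, the load order should look something like this (the file names are illustrative):
<script src="jquery.js"></script>
<script src="jquery-ui.js"></script>
<script src="ui.colorpicker.js"></script>
<script>
    // only now is it safe to call the plugin
    $(function () { $('#picker').colorpicker(); });
</script>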
Your ngClick
is correct; you just need the right service. $location
is what you're looking for. Check out the docs for the full details, but the solution to your specific question is this:
$location.path( '/new-page.html' );
The $location
service will add the hash (#) if it's appropriate based on your current settings and ensure no page reload occurs.
You could also do something more flexible with a directive if you so chose:
.directive( 'goClick', function ( $location ) {
return function ( scope, element, attrs ) {
var path;
attrs.$observe( 'goClick', function (val) {
path = val;
});
element.bind( 'click', function () {
scope.$apply( function () {
$location.path( path );
});
});
};
});
And then you could use it on anything:
<button go-click="/go/to/this">Click!</button>
There are many ways to improve this directive; it's merely to show what could be done. Here's a Plunker demonstrating it in action: http://plnkr.co/edit/870E3psx7NhsvJ4mNcd2?p=preview.
In dplyr 0.3 this can be easily achieved using the distinct()
method.
Here is an example:
distinct_df = df %>% distinct(field1)
You can get a vector of the distinct values with:
distinct_vector = distinct_df$field1
You can also select a subset of columns at the same time as you perform the distinct()
call, which can be cleaner to look at if you examine the data frame using head/tail/glimpse.:
distinct_df = df %>% distinct(field1) %>% select(field1)
distinct_vector = distinct_df$field1
In case you use jQuery, you need to wait for $(window).load, because the embedded SVG document might not yet be loaded at $(document).ready
$(window).load(function () {
//alert("Document loaded, including graphics and embedded documents (like SVG)");
var a = document.getElementById("alphasvg");
//get the inner DOM of alpha.svg
var svgDoc = a.contentDocument;
//get the inner element by id
var delta = svgDoc.getElementById("delta");
delta.addEventListener("mousedown", function(){ alert('hello world!')}, false);
});
I only know how to do this on Windows (but it should be similar on any OS, you will just need to find the correct location yourself - google search would help with that).
On Windows:
Go to your User Folder - on Windows 7/8 this would be:
[SYSDRIVE]:\Users\[your username]
(ex. C:\Users\JohnDoe\
)
In this folder there should be a folder called .AndroidStudioBeta
or .AndroidStudio
(notice the period at the start - so on some OSes it would be hidden).
Update
Now, Android Studio settings is at:
C:\Users\<Your User>\AppData\Roaming\Google\.AndroidStudio4.X
Delete this folder (or better yet, move it to a backup location - so you can return it if something goes wrong).
This should reset your Android Studio settings to default.
BufferedReader reader = new BufferedReader(new InputStreamReader(response.getEntity().getContent(), "UTF-8"));
String sResponse;
StringBuilder s = new StringBuilder();
while ((sResponse = reader.readLine()) != null) {
s = s.append(sResponse);
}
JSONObject jsonObject = new JSONObject(s.toString());
String link = jsonObject.getString("Result");
Here is the answer for IE8, and it works: for alpha to work in IE8 you have to give the element a position attribute, like
position: relative, or another value.
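A minimal CSS sketch (the class name is illustrative):
.faded {
    position: relative;         /* required for the alpha filter to apply in IE8 */
    filter: alpha(opacity=50);  /* IE8 */
    opacity: 0.5;               /* standards-based browsers */
}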
For printing the Name column
df['Name']
I had a similar problem in a MS-Access query, and I solved it by changing my equivalent fName
to an "Expression" (as opposed to "Group By" or "Sum"). So long as all of my fields were "Expression", the Access query builder did not require any Group By
clause at the end.
Simple solution for shallow equality using ES6:
const arr1test = arr1.slice().sort()
const arr2test = arr2.slice().sort()
const equal = !arr1test.some((val, idx) => val !== arr2test[idx])
Creates shallow copies of each array and sorts them. Then uses some()
to loop through arr1test
values, checking each value against the value in arr2test
with the same index. If all values are equal, some()
returns false
, and in turn equal
evaluates to true
.
Could also use every()
, but it would have to cycle through every element in the array to satisfy a true
result, whereas some()
will bail as soon as it finds a value that is not equal:
const equal = arr1test.every((val, idx) => val === arr2test[idx])
Using JSON.parse is recommended.
There is an alternative you can use:
var myObject = eval('(' + myJSONtext + ')');
The easiest way to add a text to a JFrame:
JFrame window = new JFrame("JFrame with text");
window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
window.setLayout(new BorderLayout());
window.add(new JLabel("Hello World"), BorderLayout.CENTER);
window.pack();
window.setVisible(true);
window.setLocationRelativeTo(null);
Think of the ? as meaning "unknown". Thus, "ArrayList<? extends Object>" is to say "an unknown type that (or as long as it) extends Object". Therefore, needless to say, arrayList.add(3) would be putting something you know into an unknown, i.e. 'forgetting'.
You can instantiate it once you have implemented all the abstract methods. Then the constructor will be called.
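For example, in Java (assuming that's the language in question here) you can do this with an anonymous subclass that implements every abstract method; a minimal sketch:
abstract class Task {
    abstract void run();
}

public class Demo {
    public static void main(String[] args) {
        Task t = new Task() {   // Task's constructor runs here
            @Override
            void run() { System.out.println("running"); }
        };
        t.run();
    }
}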
CSS provides only two values for its "list-style-position" - inside and outside. With "inside" second lines are flush with the list points not with the superior line:
In ordered lists, without intervention, if you give "list-style-position" the value "inside", the second line of a long list item will have no indent, but will go back to the left edge of the list (i.e. it will left-align with the number of the item). This is peculiar to ordered lists and doesn't happen in unordered lists.
If you instead give "list-style-position" the value "outside", the second line will have the same indent as the first line.
I had a list with a border and had this problem. With "list-style-position" set to "inside", my list didn't look like I wanted it to. But with "list-style-position" set to "outside", the numbers of the list items fell outside the box.
I solved this by simply setting a wider left margin for the whole list, which pushed the whole list toward the right, back into the position it was in before.
CSS:
ol.classname {margin:0;padding:0;}
ol.classname li {margin:0.5em 0 0 0;padding-left:0;list-style-position:outside;}
HTML:
<ol class="classname" style="margin:0 0 0 1.5em;">
$('#propertyType').on("click",function(e){
self.propertyTypeDialog = !self.propertyTypeDialog;
b = true;
e.stopPropagation();
console.log("input clicked");
});
$(document).on('click','body:not(#propertyType)',function (e) {
e.stopPropagation();
if(b == true) {
if ($(e.target).closest("#configuration").length == 0) {
b = false;
self.propertyTypeDialog = false;
console.log("outside clicked");
}
}
// console.log($(e.target).closest("#configuration").length);
});
You need to set up the remote repository on your client:
git remote add origin ssh://myserver.com/path/to/project
As an option, you can use com.fasterxml.jackson.databind.ObjectMapper
to initialize a child class from a parent:
public class A {
int x;
int y;
}
public class B extends A {
int z;
}
ObjectMapper MAPPER = new ObjectMapper(); //it's configurable
MAPPER.configure( DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false );
MAPPER.configure( SerializationFeature.FAIL_ON_EMPTY_BEANS, false );
//Then wherever you need to initialize child from parent:
A parent = new A(x, y);
B child = MAPPER.convertValue( parent, B.class);
child.setZ(z);
You can still use any lombok
annotations on A and B if you need.
I like your question, regardless of whether it's off topic or not :P
An interesting aside; I've just completed a subject in my degree where we covered robotics and computer vision. Our project for the semester was incredibly similar to the one you describe.
We had to develop a robot that used an Xbox Kinect to detect coke bottles and cans on any orientation in a variety of lighting and environmental conditions. Our solution involved using a band pass filter on the Hue channel in combination with the hough circle transform. We were able to constrain the environment a bit (we could chose where and how to position the robot and Kinect sensor), otherwise we were going to use the SIFT or SURF transforms.
You can read about our approach on my blog post on the topic :)
I'll try to explain this as simply as possible, so there is no guarantee of the accuracy of the actual terms.
Session is where you initiate connectivity to AWS services. E.g. the following is a default session that uses the default credential profile (e.g. ~/.aws/credentials, or assumes your EC2 instance uses an IAM instance profile):
sqs = boto3.client('sqs')
s3 = boto3.resource('s3')
Because the default session is limited to the profile or instance profile used, sometimes you need a custom session to override the default session configuration (e.g. region_name, endpoint_url, etc.), e.g.:
# custom resource session must use boto3.Session to do the override
my_west_session = boto3.Session(region_name = 'us-west-2')
my_east_session = boto3.Session(region_name = 'us-east-1')
backup_s3 = my_west_session.resource('s3')
video_s3 = my_east_session.resource('s3')
# you have two choices of create custom client session.
backup_s3c = my_west_session.client('s3')
video_s3c = boto3.client("s3", region_name = 'us-east-1')
Resource: This is the high-level service class it is recommended to use. It lets you tie particular AWS resources to a handle and pass it along, so you use this abstraction rather than worrying about which target services are pointed to. As you notice from the session part, if you have a custom session, you just pass this abstract object rather than all the custom region details. A more elaborate example follows:
import boto3
my_west_session = boto3.Session(region_name = 'us-west-2')
my_east_session = boto3.Session(region_name = 'us-east-1')
backup_s3 = my_west_session.resource("s3")
video_s3 = my_east_session.resource("s3")
backup_bucket = backup_s3.Bucket('backupbucket')
video_bucket = video_s3.Bucket('videobucket')
# just pass the instantiated bucket object
def list_bucket_contents(bucket):
for object in bucket.objects.all():
print(object.key)
list_bucket_contents(backup_bucket)
list_bucket_contents(video_bucket)
Client is a low-level class object. For each client call, you need to explicitly specify the targeted resources, and the designated service target name must be passed along. You lose the abstraction ability.
For example, if you only deal with the default session, this looks similar to boto3.resource.
import boto3
s3 = boto3.client('s3')
def list_bucket_contents(bucket_name):
    for obj in s3.list_objects_v2(Bucket=bucket_name)['Contents']:
        print(obj['Key'])
list_bucket_contents('Mybucket')
However, if you want to list objects from a bucket in different regions, you need to specify the explicit bucket parameter required for the client.
import boto3
backup_s3 = my_west_session.client('s3',region_name = 'us-west-2')
video_s3 = my_east_session.client('s3',region_name = 'us-east-1')
# you must pass boto3.Session.client and the bucket name
def list_bucket_contents(s3session, bucket_name):
response = s3session.list_objects_v2(Bucket=bucket_name)
if 'Contents' in response:
for obj in response['Contents']:
print(obj['Key'])
list_bucket_contents(backup_s3, 'backupbucket')
list_bucket_contents(video_s3 , 'videobucket')
As others have mentioned, you are probably not implementing equals() correctly.
However, you should also note that this code is considered quite inefficient, since the runtime could be quadratic in the number of elements.
You might want to consider using a Set structure instead of a List, or building a Set first and then turning it into a list.
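For example, a minimal sketch of the build-a-Set-first approach (assuming the elements implement equals()/hashCode() correctly):
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class Dedup {
    public static void main(String[] args) {
        List<String> items = List.of("a", "b", "a", "c");
        // LinkedHashSet drops duplicates while preserving insertion order
        List<String> deduped = new ArrayList<>(new LinkedHashSet<>(items));
        System.out.println(deduped); // [a, b, c]
    }
}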
Typescript Importer does do the job for me
https://marketplace.visualstudio.com/items?itemName=pmneo.tsimporter
It automatically searches for typescript definitions inside your workspace and when you press enter it'll import it.
I encountered the same problem and solved it by
git checkout -f branch
and its specification is rather clear.
-f, --force
When switching branches, proceed even if the index or the working tree differs from HEAD. This is used to throw away local changes.
When checking out paths from the index, do not fail upon unmerged entries; instead, unmerged entries are ignored.
I like sed.
substr="foo"
nonsub="$(echo "$string" | sed "s/$substr//")"
hassub=0 ; [ "$string" != "$nonsub" ] && hassub=1
Edit, Logic:
Use sed to remove instance of substring from string
If new string differs from old string, substring exists
Maybe you are in a situation where you are creating a new mysqli object:
$MyConnection = new mysqli($hn, $un, $pw, $db);
but even after you close the object
$MyConnection->close();
if you will use print_r()
to check the contents of $MyConnection
, you will get an error as below:
Error:
mysqli Object
Warning: print_r(): Property access is not allowed yet in /path/to/program on line ..
( [affected_rows] => [client_info] => [client_version] =>.................)
in which case you can't use unlink(), because unlink() requires a path-name string, but in this case $MyConnection is an object.
So you have another choice of setting its value to null:
$MyConnection = null;
Now things go right, as you expected: you no longer have any content inside the variable $MyConnection, and you have already cleaned up the mysqli object.
It's a recommended practice to close the Object before setting the value of your variable to null
.
FORFILES /S /D -10 /C "cmd /c IF @isdir == TRUE rd /S /Q @path"
I could not get Blorgbeard's suggestion to work, but I was able to get it to work with RMDIR instead of RD:
FORFILES /p N:\test /S /D -10 /C "cmd /c IF @isdir == TRUE RMDIR /S /Q @path"
Since RMDIR won't delete folders that aren't empty, I also ended up using this code to delete the files that were over 10 days old and then the folders that were over 10 days old.
FOR /d %%K in ("n:\test\*") DO (
    FOR /d %%J in ("%%K\*") DO (
        FORFILES /P %%J /S /M *.* /D -10 /C "cmd /c del @file"
)
)
FORFILES /p N:\test /S /D -10 /C "cmd /c IF @isdir == TRUE RMDIR /S /Q @path"
I used this code to purge the subfolders of the folders within test (for example, n:\test\abc\123 would get purged when empty, but n:\test\abc would not).
You can only use aggregates for comparison in the HAVING clause:
GROUP BY ...
HAVING SUM(cash) > 500
The HAVING
clause requires you to define a GROUP BY clause.
To get the first row where the sum of all the previous cash is greater than a certain value, use:
SELECT y.id, y.cash
FROM (SELECT t.id,
t.cash,
(SELECT SUM(x.cash)
FROM TABLE x
WHERE x.id <= t.id) AS running_total
FROM TABLE t
ORDER BY t.id) y
WHERE y.running_total > 500
ORDER BY y.id
LIMIT 1
Because the aggregate function occurs in a subquery, the column alias for it can be referenced in the WHERE clause.
var mysql = require('mysql');
var pool = mysql.createPool({
host : 'yourip',
port : 'yourport',
user : 'dbusername',
password : 'dbpwd',
database : 'database schema name',
dateStrings: true,
multipleStatements: true
});
// TODO - if any pool issues need to try this link for connection management
// https://stackoverflow.com/questions/18496540/node-js-mysql-connection-pooling
module.exports = function(qry, qrytype, msg, callback) {
if(qrytype != 'S') {
console.log(qry);
}
pool.getConnection(function(err, connection) {
if(err) {
if(connection)
connection.release();
throw err;
}
// Use the connection
connection.query(qry, function (err, results, fields) {
connection.release();
if(err) {
callback('E#connection.query-Error occurred.#'+ err.sqlMessage);
return;
}
if(qrytype==='S') {
//for Select statement
// setTimeout(function() {
callback(results);
// }, 500);
} else if(qrytype==='N'){
let resarr = results[results.length-1];
let newid= '';
if(resarr.length)
newid = resarr[0]['@eid'];
callback(msg + newid);
} else if(qrytype==='U'){
//let ret = 'I#' + entity + ' updated#Updated rows count: ' + results[1].changedRows;
callback(msg);
} else if(qrytype==='D'){
//let resarr = results[1].affectedRows;
callback(msg);
}
});
connection.on('error', function (err) {
connection.release();
callback('E#connection.on-Error occurred.#'+ err.sqlMessage);
return;
});
});
}
Use this method to validate your email format. Pass the email as a string; it returns true if the format is correct, otherwise false.
/**
* validate your email address format. [email protected]
*/
public boolean emailValidator(String email)
{
Pattern pattern;
Matcher matcher;
final String EMAIL_PATTERN = "^[_A-Za-z0-9-]+(\\.[_A-Za-z0-9-]+)*@[A-Za-z0-9]+(\\.[A-Za-z0-9]+)*(\\.[A-Za-z]{2,})$";
pattern = Pattern.compile(EMAIL_PATTERN);
matcher = pattern.matcher(email);
return matcher.matches();
}
Simple and Complete Solution working for me. Supports ASP.NET MVC 4 and 5 (Supports ASP.NET 4.0, 4.5, and 4.5.1)
Step 1: Install NuGet Package by "Install-Package reCAPTCH.MVC"
Step 2: Add your Public and Private key to your web.config file in appsettings section
<appSettings>
<add key="ReCaptchaPrivateKey" value=" -- PRIVATE_KEY -- " />
<add key="ReCaptchaPublicKey" value=" -- PUBLIC KEY -- " />
</appSettings>
You can create an API key pair for your site at https://www.google.com/recaptcha/intro/index.html and click on Get reCAPTCHA at top of the page
Step 3: Modify your form to include reCaptcha
@using reCAPTCHA.MVC
@using (Html.BeginForm())
{
@Html.Recaptcha()
@Html.ValidationMessage("ReCaptcha")
<input type="submit" value="Register" />
}
Step 4: Implement the Controller Action that will handle the form submission and Captcha validation
[CaptchaValidator(
PrivateKey = "your private reCaptcha Google Key",
ErrorMessage = "Invalid input captcha.",
RequiredMessage = "The captcha field is required.")]
public ActionResult MyAction(myVM model)
{
if (ModelState.IsValid) //this will take care of captcha
{
}
}
OR
public ActionResult MyAction(myVM model, bool captchaValid)
{
if (captchaValid) //manually check for captchaValid
{
}
}
This answer is deprecated, please see @ankitjaininfo's answer below for a more modern solution
Here's how I think you make a POST request with data and a cookie using just the node http library. This example is posting JSON, set your content-type and content-length accordingly if you post different data.
// NB:- node's http client API has changed since this was written
// this code is for 0.4.x
// for 0.6.5+ see http://nodejs.org/docs/v0.6.5/api/http.html#http.request
var http = require('http');
var data = JSON.stringify({ 'important': 'data' });
var cookie = 'something=anything'
var client = http.createClient(80, 'www.example.com');
var headers = {
'Host': 'www.example.com',
'Cookie': cookie,
'Content-Type': 'application/json',
'Content-Length': Buffer.byteLength(data,'utf8')
};
var request = client.request('POST', '/', headers);
// listening to the response is optional, I suppose
request.on('response', function(response) {
response.on('data', function(chunk) {
// do what you do
});
response.on('end', function() {
// do what you do
});
});
// you'd also want to listen for errors in production
request.write(data);
request.end();
What you send in the Cookie
value should really depend on what you received from the server. Wikipedia's write-up of this stuff is pretty good: http://en.wikipedia.org/wiki/HTTP_cookie#Cookie_attributes
There is a simple way available in Kotlin:
// assuming the list elements expose a name property
var lst: List<MyItem> = ...
val listOfStrings: List<String> = lst.map { it.name }
An easy fix is to just create a new branch for that commit and checkout to it: git checkout -b <branch-name> <commit-hash>
.
In this way, all the changes you made will be saved in that branch. In case you need to clean up your master branch from leftover commits be sure to run git reset --hard master
.
With this, you will be rewriting your branches so be sure not to disturb anyone with these changes. Be sure to take a look at this article for a better illustration of detached HEAD state.
You can use this:
addedDate = datetime.now().replace(microsecond=0)
You could try using the system stored procedure sp_help_job. This returns information on the job, its steps, schedules and servers. For example
EXEC msdb.dbo.sp_help_job @Job_name = 'Your Job Name'
SQL Books Online should contain lots of information about the records it returns.
For returning information on multiple jobs, you could try querying the msdb system tables sysjobs, sysjobsteps, sysjobschedules and sysjobservers, which hold the various bits of information on the job.
Their names are fairly self-explanatory (apart from sysjobservers, which holds information on when the job last ran and the outcome).
Again, information on the fields can be found at MSDN. For example, check out the page for SysJobs
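For example, a sketch joining two of them (column semantics per the msdb documentation):
SELECT j.name,
       s.last_run_date,    -- int, yyyymmdd
       s.last_run_outcome  -- 0 = failed, 1 = succeeded, 3 = canceled, 5 = unknown
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobservers s ON s.job_id = j.job_id;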
This is demo code, but it will help:
<!DOCTYPE html>
<html>
<head>
<title>APITABLE 3</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
$.ajax({
type: "GET",
url: "https://reqres.in/api/users/",
data: '$format=json',
dataType: 'json',
success: function (data) {
$.each(data.data,function(d,results){
console.log(data);
$("#apiData").append(
"<tr>"
+"<td>"+results.first_name+"</td>"
+"<td>"+results.last_name+"</td>"
+"<td>"+results.id+"</td>"
+"<td>"+results.email+"</td>"
+"<td>"+results.bentrust+"</td>"
+"</tr>" )
})
}
});
});
</script>
</head>
<body>
<table id="apiTable">
<thead>
<tr>
    <th>First Name</th>
    <th>Last Name</th>
    <th>Id</th>
    <th>Email</th>
</tr>
</thead>
<tbody id="apiData"></tbody>
</body>
</html>
If you have installed react-native globally, just open a terminal/command-line tool and type react-native -v to get your answer.
If you have installed react-native for a specific project, open a terminal/command-line tool, navigate to your project, and type react-native -v to get your answer.
How about:
conditions = [a, b, c]
if any(conditions) and not all(conditions):
...
Other variant:
if 1 <= sum(map(bool, conditions)) <= 2:
...
Just use
File.Copy(filepath, @"\\192.168.1.28\Files\" + Path.GetFileName(filepath)); // the destination must include a file name
A windows fileshare exposed via a UNC path is treated as part of the file system, and has nothing to do with the web.
The credentials used will be that of the ASP.NET worker process, or any impersonation you've enabled. If you can tweak those to get it right, this can be done.
You may run into problems because you are using the IP address instead of the server name (windows trust settings prevent leaving the domain - by using IP you are hiding any domain details). If at all possible, use the server name!
If this is not on the same windows domain, and you are trying to use a different domain account, you will need to specify the username as "[domain_or_machine]\[username]"
If you need to specify explicit credentials, you'll need to look into coding an impersonation solution.
Expand your one liner into multiple lines. Then it becomes easy:
f.write(re.split("Tech ID:|Name:|Account #:",line)[-1])
parts = re.split("Tech ID:|Name:|Account #:",line)
wanted_part = parts[-1]
wanted_part_stripped = wanted_part.strip()
f.write(wanted_part_stripped)
The ODP.NET provider from Oracle uses bind-by-position by default. To change the behavior to bind-by-name, set the property BindByName to true. Then you can dismiss the double definition of parameters:
using(OracleCommand cmd = con.CreateCommand()) {
...
cmd.BindByName = true;
...
}
This works for me:
SELECT REPLACE(SUBSTRING(COLUMN_TYPE,6,(LENGTH(COLUMN_TYPE)-6)),"'","")
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA='__TABLE_SCHEMA__'
AND TABLE_NAME='__TABLE_NAME__'
AND COLUMN_NAME='__COLUMN_NAME__'
and then
explode(',', $data)
SELECT a.image_id
FROM list a
INNER JOIN list b
ON a.image_id = b.image_id
AND b.style_id = 25
AND b.style_value = 'big'
INNER JOIN list c
ON a.image_id = c.image_id
AND c.style_id = 27
AND c.style_value = 'round'
WHERE a.style_id = 24
AND a.style_value = 'red'
Use the getWidth method in the following class:
import java.awt.*;
import java.awt.geom.*;
import java.awt.font.*;
class StringMetrics {
Font font;
FontRenderContext context;
public StringMetrics(Graphics2D g2) {
font = g2.getFont();
context = g2.getFontRenderContext();
}
Rectangle2D getBounds(String message) {
return font.getStringBounds(message, context);
}
double getWidth(String message) {
Rectangle2D bounds = getBounds(message);
return bounds.getWidth();
}
double getHeight(String message) {
Rectangle2D bounds = getBounds(message);
return bounds.getHeight();
}
}
Each column has like()
method, which can be used in query.filter()
. Given a search string, add a %
character on either side to search as a substring in both directions.
tag = request.form["tag"]
search = "%{}%".format(tag)
posts = Post.query.filter(Post.tags.like(search)).all()
First, make sure you have libcurl
(see: http://curl.haxx.se) installed. Then make sure your copy of PHP has been compiled with the --with-curl[=DIR]
flag. For more info see:
If XAMPP comes pre-compiled with cURL you may just need to enable the extension in your php.ini file (usually by removing a semicolon at the start of the line which includes the extension).
For me it was the "Start In" value: I copied the values from an older server and updated the path to the new .exe location, but I forgot to update the "Start In" location. If it doesn't exist, you get this error too.
Quoting @hans-passant 's comment from above, because it is valuable to debugging this issue:
Convert the error code to hex to get 0x8007010B. The 7 makes it a Windows error. Which makes 010B error code 267. "The directory name is invalid". Sure, that happens.
In Python 2.6 the '%' operator performs modulus, and this hasn't changed in Python 3.
The modulo operator tells you the remainder of a division of two numbers.
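For example:
print(10 % 3)  # 1, the remainder of dividing 10 by 3
print(7 % 2)   # 1, so 7 is odd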
Using parameter --force:
npm i -f
The max size for a column of type NVARCHAR(MAX)
is 2 GByte of storage.
Since NVARCHAR
uses 2 bytes per character, that's approx. 1 billion characters.
Leo Tolstoy's War and Peace is a 1'440 page book, containing about 600'000 words - so that might be 6 million characters - well rounded up. So you could stick about 166 copies of the entire War and Peace book into each NVARCHAR(MAX)
column.
Is that enough space for your needs? :-)
Perhaps a little off topic, but be aware that these kinds of messages can also be seen when you are debugging your code with a breakpoint inside an async callback like setTimeout, as below:
[Violation] 'setTimeout' handler took 43129ms
That number (43129 ms) depends on how long you stay stopped in your async function.
If you use any class other than Activity, you might want to do it like this:
BufferedReader bufferedReader = new BufferedReader(new InputStreamReader( YourApplication.getInstance().getAssets().open("text.txt"), "UTF-8"));
Warning: this is an experimental feature that could dramatically change or even cease to exist in future releases
You can use ES7 statics:
npm install babel-preset-stage-0
And then add "stage-0"
to .babelrc presets:
{
"presets": ["es2015", "react", "stage-0"]
}
Afterwards, you go
class Component extends React.Component {
static foo = 'bar';
static baz = {a: 1, b: 2}
}
And then you use them like this:
Component.foo
Try the following way:
<input type="submit" value="Search" class="search-btn" />
<a href="javascript:;" onclick="$('.search-btn').click();">Go</a>
This is what worked for me, a combination of the answers by amalBit & Melbourne Lopes:
public void timerDelayRunForScroll(long time) {
Handler handler = new Handler();
handler.postDelayed(new Runnable() {
public void run() {
try {
int h1 = mListView.getHeight();
int h2 = v.getHeight();
mListView.smoothScrollToPositionFromTop(YOUR_POSITION, h1/2 - h2/2, 500);
} catch (Exception e) {}
}
}, time);
}
and then call this method like:
timerDelayRunForScroll(400);
Below is a Swift Playground example:
import UIKit
let jsonString = "{\"name\": \"John Doe\", \"phone\":123456}"
let data = jsonString.data(using: .utf8)
var jsonObject: Any
do {
jsonObject = try JSONSerialization.jsonObject(with: data!) as Any
if let obj = jsonObject as? NSDictionary {
print(obj["name"])
}
} catch {
print("error")
}
We have set compatibility mode for IE11 to resolve an issue: Settings>Compatibility View Settings>Add your site name or Check "Display intranet sites in Compatibility View" if your portal is in the intranet.
IE version 11.0.9600.16521
Worked for us, hope this helps someone.
If you're creating a framework the whole idea is to make it portable. Tying a framework to the app delegate defeats the purpose of building a framework. What is it you need the app delegate for?
In my case the error was very misleading. The problem was that PHPMyAdmin uses "ALTER TABLE" when you click on the "make unique" button instead of "ALTER IGNORE TABLE", so I had to do it manually, like in:
ALTER TABLE mytbl ADD UNIQUE (columnName);
In June 2016 Instagram made most of the functionality of their API available only to applications that have passed a review process. They still however provide JSON data through the web interface, and you can add the parameter __a=1
to a URL to only include the JSON data.
max=
while :;do
c=$(curl -s "https://www.instagram.com/username/?__a=1&max_id=$max")
jq -r '.user.media.nodes[]?|.display_src'<<<"$c"
max=$(jq -r .user.media.page_info.end_cursor<<<"$c")
jq -e .user.media.page_info.has_next_page<<<"$c">/dev/null||break
done
Edit: As mentioned in the comment by alnorth29, the max_id
parameter is now ignored. Instagram also changed the format of the response, and you need to perform additional requests to get the full-size URLs of images in the new-style posts with multiple images per post. You can now do something like this to list the full-size URLs of images on the first page of results:
c=$(curl -s "https://www.instagram.com/username/?__a=1")
jq -r '.graphql.user.edge_owner_to_timeline_media.edges[]?|.node|select(.__typename!="GraphSidecar").display_url'<<<"$c"
jq -r '.graphql.user.edge_owner_to_timeline_media.edges[]?|.node|select(.__typename=="GraphSidecar")|.shortcode'<<<"$c"|while read l;do
curl -s "https://www.instagram.com/p/$l?__a=1"|jq -r '.graphql.shortcode_media|.edge_sidecar_to_children.edges[]?.node|.display_url'
done
To make a list of the shortcodes of each post made by the user whose profile is opened in the frontmost tab in Safari, I use a script like this:
sjs(){ osascript -e'{on run{a}','tell app"safari"to do javascript a in document 1',end} -- "$1";}
while :;do
sjs 'o="";a=document.querySelectorAll(".v1Nh3 a");for(i=0;e=a[i];i++){o+=e.href+"\n"};o'>>/tmp/a
sjs 'window.scrollBy(0,window.innerHeight)'
sleep 1
done
For pasting something that is in the system clipboard you can just use SHIFT-INS.
It works in Windows, and I'm guessing it works well in Linux too.
We also had to develop a solution which would even work with APIs requiring authentication (see this article)
Using AngularJS in a nutshell here is how we did it:
Step 1: Create a dedicated directive
// jQuery needed, uses Bootstrap classes, adjust the path of templateUrl
app.directive('pdfDownload', function() {
return {
restrict: 'E',
templateUrl: '/path/to/pdfDownload.tpl.html',
scope: true,
link: function(scope, element, attr) {
var anchor = element.children()[0];
// When the download starts, disable the link
scope.$on('download-start', function() {
$(anchor).attr('disabled', 'disabled');
});
// When the download finishes, attach the data to the link. Enable the link and change its appearance.
scope.$on('downloaded', function(event, data) {
$(anchor).attr({
href: 'data:application/pdf;base64,' + data,
download: attr.filename
})
.removeAttr('disabled')
.text('Save')
.removeClass('btn-primary')
.addClass('btn-success');
// Also overwrite the download pdf function to do nothing.
scope.downloadPdf = function() {
};
});
},
controller: ['$scope', '$attrs', '$http', function($scope, $attrs, $http) {
$scope.downloadPdf = function() {
$scope.$emit('download-start');
$http.get($attrs.url).then(function(response) {
$scope.$emit('downloaded', response.data);
});
};
}]
});
Step 2: Create a template
<a href="" class="btn btn-primary" ng-click="downloadPdf()">Download</a>
Step 3: Use it
<pdf-download url="/some/path/to/a.pdf" filename="my-awesome-pdf"></pdf-download>
This will render a blue button. When clicked, a PDF will be downloaded (Caution: the backend has to deliver the PDF in Base64 encoding!) and put into the href. The button turns green and switches the text to Save. The user can click again and will be presented with a standard download file dialog for the file my-awesome.pdf.
Our example uses PDF files, but apparently you could provide any binary format given it's properly encoded.
In short, yes. But there are times when you might favor one vs. the other. Google "case switch vs. if else". There are some discussions already on SO too. Also, here is a good video that talks about it in the context of MATLAB:
http://blogs.mathworks.com/pick/2008/01/02/matlab-basics-switch-case-vs-if-elseif/
Personally, when I have 3 or more cases, I usually just go with case/switch.
recreate() (as mentioned by TPReal) will only restart the current activity, but the previous activities will still be in the back stack and the theme will not be applied to them.
So, another solution for this problem is to recreate the task stack completely, like this:
TaskStackBuilder.create(getActivity())
.addNextIntent(new Intent(getActivity(), MainActivity.class))
.addNextIntent(getActivity().getIntent())
.startActivities();
EDIT:
Just put the code above after you perform changing of theme on the UI or somewhere else. All your activities should have method setTheme()
called before onCreate()
, probably in some parent activity. It is also a normal approach to store the theme chosen in SharedPreferences
, read it and then set using setTheme()
method.
You can map JSON to specific class in Groovy using as
operator:
import groovy.json.JsonSlurper
String json = '''
{
"name": "John",
"age": 20
}
'''
def person = new JsonSlurper().parseText(json) as Person
with(person) {
assert name == 'John'
assert age == 20
}
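For completeness, a minimal Person class that this example assumes could look like:
class Person {
    String name
    int age
}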
Most answers here do not realize the problem: the user wants a COPY of the elements from the first list into the second list, where the destination list's elements are new objects, not references to the elements of the original list (meaning that changing an element of the second list should not change the value of the corresponding element of the source list). For mutable objects we cannot use the ArrayList(Collection) constructor, because it will simply refer to the original list's elements rather than copy them. You need a list cloner that copies each object.
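A minimal sketch of such per-element copying, assuming a hypothetical Person class with a copy constructor:
import java.util.ArrayList;
import java.util.List;

class ListCloner {
    // returns a list of new Person objects, not references to the originals
    static List<Person> deepCopy(List<Person> source) {
        List<Person> copy = new ArrayList<>(source.size());
        for (Person p : source) {
            copy.add(new Person(p)); // Person(Person other) is a hypothetical copy constructor
        }
        return copy;
    }
}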
You can use these commands :
<? system('whoami');?>
or
<? passthru('whoami');?>
or
<? print exec('whoami');?>
or
<? print shell_exec('whoami');?>
Be aware, the get_current_user()
returns the name of the owner of the current PHP script !
If you are sure that this change is suitable for the environment you're working in, set the FK constraints on the secondary tables to ON UPDATE CASCADE (for example, in SSMS via the foreign key's INSERT And UPDATE Specification settings).
When you then update a value in the PK column in your primary table, the FK references in the other tables will be updated to point at the new value, preserving data integrity.
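The equivalent T-SQL, with hypothetical table and column names, would be something like:
ALTER TABLE dbo.ChildTable
    ADD CONSTRAINT FK_ChildTable_ParentTable
    FOREIGN KEY (ParentId)
    REFERENCES dbo.ParentTable (Id)
    ON UPDATE CASCADE;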
Every other answer here is 5-6 years old, so a bit outdated or deprecated. Let's talk in 2020.
simple-node-logger is simple multi-level logger for console, file, and rolling file appenders. Features include:
levels: trace, debug, info, warn, error and fatal levels (plus all and off)
flexible appenders/formatters, defaulting to HH:mm:ss.SSS LEVEL message
add appenders to send output to console, file, rolling file, etc.
change log levels on the fly
domain and category columns
overridable format methods in base appender
stats that track counts of all log statements including warn, error, etc
You can easily use it in any nodejs web application:
// create a stdout console logger
const log = require('simple-node-logger').createSimpleLogger();
or
// create a stdout and file logger
const log = require('simple-node-logger').createSimpleLogger('project.log');
or
// create a custom timestamp format for log statements
const SimpleNodeLogger = require('simple-node-logger'),
opts = {
logFilePath:'mylogfile.log',
timestampFormat:'YYYY-MM-DD HH:mm:ss.SSS'
},
log = SimpleNodeLogger.createSimpleLogger( opts );
or
// create a file only file logger
const log = require('simple-node-logger').createSimpleFileLogger('project.log');
or
// create a rolling file logger based on date/time that fires process events
const opts = {
errorEventName:'error',
logDirectory:'/mylogfiles', // NOTE: folder must exist and be writable...
fileNamePattern:'roll-<DATE>.log',
dateFormat:'YYYY.MM.DD'
};
const log = require('simple-node-logger').createRollingFileLogger( opts );
Messages can be logged by
log.info('this is logged info message')
log.warn('this is logged warn message')//etc..
PLUS POINT: It can send logs to the console or to a socket, and you can also add your own log levels.
This is the most effective and easy way to handle logging.
I believe you need to specify "Option Infer On" for this to work.
Option Infer allows the compiler to make a guess at what is being represented by your code; thus it will guess that {"stuff"} is an array of strings. With "Option Infer Off", {"stuff"} won't ever have a type assigned to it, and so it will always fail without a type specifier.
Option Infer is, I think On by default in new projects, but Off by default when you migrate from earlier frameworks up to 3.5.
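For example, a minimal VB sketch:
Option Infer On

Module Demo
    Sub Main()
        Dim names = {"stuff"} ' inferred as String()
        Console.WriteLine(names(0))
    End Sub
End Module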
Opinion incoming:
Also, you mention that you've got "Option Explicit Off". Please don't do this.
Setting "Option Explicit Off" means that you don't ever have to declare variables. This means that the following code will silently and invisibly create the variable "Y":
Dim X as Integer
Y = 3
This is horrible, mad, and wrong. It creates variables when you make typos. I keep hoping that they'll remove it from the language.
Here's one solution that I'd use:
var div = '<div id="yourId" class="yourClass" yourAttribute="yourAttributeValue">blah</div>';
If you wanted the attribute and/or attribute values to be based on variables:
var id = "hello";
var classAttr = "class";
var div = '<div id='+id+' '+classAttr+'="world" >Blah</div>';
Then, to append to the body:
document.body.innerHTML += div;
Easy as pie.
In my case the mentioned "duplicate entry" error arose after setting multiDexEnabled true in the build.gradle.
In order to fully resolve the error, first of all have a look at Configure Apps with Over 64K Methods (especially "Configuring Your App for Multidex with Gradle").
Furthermore, search for occurrences of the class that causes the "duplicate entry" error using Ctrl+N in Android Studio. Determine the module and dependency that contains the duplicate and exclude it from the build, e.g.:
compile ('org.roboguice:roboguice:2.0') {
exclude module: 'support-v4'
}
I had to try multiple module labels till it worked. Excluding "support-v4" solves issues related to "java.util.zip.ZipException: duplicate entry: android/support/v4/*"
From: https://stackoverflow.com/a/29989542/4123403
This did the trick for me.
If your tiles are not in your bundle, either copied from the bundle or downloaded from the internet you can get the directory like this
NSString *documentdir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
NSString *tileDirectory = [documentdir stringByAppendingPathComponent:@"xxxx/Tiles"];
NSLog(@"Tile Directory: %@", tileDirectory);
Simple as:
DateTime.Now.ToString("MM.dd.yyyy");
link to MSDN on ALL formatting options for DateTime.ToString() method
See the following snippet, is this what you want?
body {
background: silver;
padding: 0 10px;
}
#content:after {
height: 10px;
display: block;
width: 100px;
background: #808080;
border-right: 1px white;
content: '';
}
#footer:before {
display: block;
content: '';
background: silver;
height: 10px;
margin-top: -20px;
margin-left: 101px;
}
#content {
background: white;
}
#footer {
padding-top: 10px;
background: #404040;
}
p {
padding: 100px;
text-align: center;
}
#footer p {
color: white;
}
<body>
<div id="content"><p>#content</p></div>
<div id="footer"><p>#footer</p></div>
</body>
For anyone else stuck with this, it's probably not working because the ajax call is interpreting your returned data as text - i.e. it's not yet a JSON object.
You can convert it to a JSON object by manually using the parseJSON command or simply adding the dataType: 'json' property to your ajax call. e.g.
jQuery.ajax({
type: 'POST',
url: '<?php echo admin_url('admin-ajax.php'); ?>',
data: data,
dataType: 'json', // ** ensure you add this line **
success: function(data) {
jQuery.each(data, function(index, item) {
//now you can access properties using dot notation
});
},
error: function(XMLHttpRequest, textStatus, errorThrown) {
alert("some error");
}
});
As mobrule indicates, you could use the following instead for a small savings:
if (defined $name && $name ne '') {
# do something with $name
}
You could ditch the defined check and get something even shorter, e.g.:
if ($name ne '') {
# do something with $name
}
But in the case where $name is not defined, although the logic flow will work just as intended, if you are using warnings (and you should be), then you'll get the following admonishment:
Use of uninitialized value in string ne
So, if there's a chance that $name might not be defined, you really do need to check for definedness first and foremost in order to avoid that warning. As Sinan Ünür points out, you can use Scalar::MoreUtils to get code that does exactly that (checks for definedness, then checks for zero length) out of the box, via the empty() method:
use Scalar::MoreUtils qw(empty);
if(not empty($name)) {
# do something with $name
}
Since there are dozens of ways to do it, here is another one:
tree -d -L 1 -i --noreport
There are tcpdump filters for HTTP GET & HTTP POST (or for both plus message body):
Run man tcpdump | less -Ip examples to see some examples.
Here’s a tcpdump filter for HTTP GET (GET = 0x47, 0x45, 0x54, 0x20):
sudo tcpdump -s 0 -A 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420'
Here’s a tcpdump filter for HTTP POST (POST = 0x50, 0x4f, 0x53, 0x54):
sudo tcpdump -s 0 -A 'tcp dst port 80 and (tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504f5354)'
Monitor HTTP traffic including request and response headers and message body (source):
tcpdump -A -s 0 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
tcpdump -X -s 0 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
For more information on the bit-twiddling in the TCP header see: String-Matching Capture Filter Generator (link to Sake Blok's explanation).
The only real difference between a soft reference and a weak reference is that the garbage collector uses algorithms to decide whether or not to reclaim a softly reachable object, but always reclaims a weakly reachable object.
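For illustration, a minimal sketch of both reference types (the System.gc() call is only a hint; actual collection timing is up to the JVM, so the printed results can vary):
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;
public class ReferenceDemo {
    public static void main(String[] args) {
        SoftReference<Object> soft = new SoftReference<>(new Object());
        WeakReference<Object> weak = new WeakReference<>(new Object());
        System.gc(); // only a hint to the collector
        // A weakly reachable object is always eligible, so this usually prints null.
        System.out.println("weak: " + weak.get());
        // A softly reachable object is kept until memory runs low, so this usually survives.
        System.out.println("soft: " + soft.get());
    }
}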
var mydate = "2017-06-28T00:00:00";
var weekDayName = moment(mydate).format('ddd');
console.log(weekDayName);
Result: Wed
var mydate = "2017-06-28T00:00:00";
var weekDayName = moment(mydate).format('dddd');
console.log(weekDayName);
Result: Wednesday
Update your Eclipse Android development tools. It worked for me.
I think new is evil, not because if you forget to use it by mistake it might cause problems but because it screws up the inheritance chain, making the language tougher to understand.
JavaScript is prototype-based object-oriented. Hence every object MUST be created from another object, like so: var newObj=Object.create(oldObj). Here oldObj is called the prototype of newObj (hence "prototype-based"). This implies that if a property is not found in newObj then it will be searched for in oldObj. newObj by default will thus be an empty object, but due to its prototype chain it appears to have all the values of oldObj.
On the other hand, if you do var newObj=new oldObj(), the prototype of newObj is oldObj.prototype, which is unnecessarily difficult to understand.
The trick is to use
Object.create=function(proto){
var F = function(){};
F.prototype = proto;
var instance = new F();
return instance;
};
It is inside this function and only here that new should be used. After this simply use the Object.create() method. The method resolves the prototype problem.
The difference between the slice() and splice() JavaScript built-in functions is that slice() returns the selected items but does not change the original array; like,
// (original Array)
let array=[1,2,3,4,5]
let index= array.indexOf(4)
// index=3
let result=array.slice(index)
// result=[4,5]
// after slicing => array=[1,2,3,4,5] (same as the original array)
but splice() affects the original array; like,
// (original Array)
let array=[1,2,3,4,5]
let index= array.indexOf(4)
// index=3
let result=array.splice(index)
// result=[4,5]
// after splicing => array=[1,2,3] (splicing affects the original array)
You should declare the logger like this:
private final static Logger LOGGER = Logger.getLogger(MyClass.class.getName());
so that if you rename the class, the logger name follows.
I wrote an article about java logger with examples here.
The relevant documentation can be found here:
http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
Start your program with following parameters:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.rmi.port=9010
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
For instance like this:
java -Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=9010 \
-Dcom.sun.management.jmxremote.local.only=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-jar Notepad.jar
-Dcom.sun.management.jmxremote.local.only=false is not necessarily required, but without it, it doesn't work on Ubuntu. The error would be something like this:
01 Oct 2008 2:16:22 PM sun.rmi.transport.tcp.TCPTransport$AcceptLoop executeAcceptLoop
WARNING: RMI TCP Accept-0: accept loop for ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=37278] throws
java.io.IOException: The server sockets created using the LocalRMIServerSocketFactory only accept connections from clients running on the host where the RMI remote objects have been exported.
at sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:89)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:387)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:359)
at java.lang.Thread.run(Thread.java:636)
see http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6754672
Also be careful with -Dcom.sun.management.jmxremote.authenticate=false, which makes access available to anyone; but if you only use it to track the JVM on your local machine it doesn't matter.
Update:
In some cases I was not able to reach the server. This was then fixed if I set this parameter as well: -Djava.rmi.server.hostname=127.0.0.1
If you can't follow Max's suggestion of installing Blat (or any other utility) on your server, then perhaps your server already has software installed that can send emails.
I know that both Oracle and SqlServer have the capability to send email. You might have to work with your DBA to get that feature enabled and/or get the privilege to use it. Of course I can see how that might present its own set of problems and red tape. Assuming you can access the feature, it is fairly simple to have a batch file login to a database and send mail.
A batch file can easily run a VBScript via CSCRIPT. A quick Google search finds many links showing how to send email with VBScript. The first one I happened to look at was http://www.activexperts.com/activmonitor/windowsmanagement/adminscripts/enterprise/mail/. It looks straightforward.
i = 0;
do
{
for (int j = 0; j < 1000; j++) // by the way, your code uses i++ here!
{
if (condition)
{
break;
}
}
++i;
} while ((i < 1000) && !condition);
extension String {
func floatValue() -> Float? {
return Float(self)
}
}
An alternative method to using upgrade from the default CPAN shell is to use cpanminus and cpan-outdated.
These are so easy and nimble to use that I hardly ever go back to CPAN shell. To upgrade all of your modules in one go, the command is:
cpan-outdated -p | cpanm
I recommend you install cpanminus like the docs describe:
curl -L https://cpanmin.us | perl - App::cpanminus
And then install cpan-outdated along with all other CPAN modules using cpanm:
cpanm App::cpanoutdated
BTW: If you are using perlbrew then you will need to repeat this for every Perl you have installed under it.
You can find out more about cpanminus and cpan-outdated at the Github repos here:
I would recommend computing this on the client side too. Less work for the server.
The following is the version that I use (from Zach Leatherman)
/*
* Javascript Humane Dates
* Copyright (c) 2008 Dean Landolt (deanlandolt.com)
* Re-write by Zach Leatherman (zachleat.com)
*
* Adopted from the John Resig's pretty.js
* at http://ejohn.org/blog/javascript-pretty-date
* and henrah's proposed modification
* at http://ejohn.org/blog/javascript-pretty-date/#comment-297458
*
* Licensed under the MIT license.
*/
function humane_date(date_str){
var time_formats = [
[60, 'just now'],
[90, '1 minute'], // 60*1.5
[3600, 'minutes', 60], // 60*60, 60
[5400, '1 hour'], // 60*60*1.5
[86400, 'hours', 3600], // 60*60*24, 60*60
[129600, '1 day'], // 60*60*24*1.5
[604800, 'days', 86400], // 60*60*24*7, 60*60*24
[907200, '1 week'], // 60*60*24*7*1.5
[2628000, 'weeks', 604800], // 60*60*24*(365/12), 60*60*24*7
[3942000, '1 month'], // 60*60*24*(365/12)*1.5
[31536000, 'months', 2628000], // 60*60*24*365, 60*60*24*(365/12)
[47304000, '1 year'], // 60*60*24*365*1.5
[3153600000, 'years', 31536000], // 60*60*24*365*100, 60*60*24*365
[4730400000, '1 century'] // 60*60*24*365*100*1.5
];
var time = ('' + date_str).replace(/-/g,"/").replace(/[TZ]/g," "),
dt = new Date,
seconds = ((dt - new Date(time) + (dt.getTimezoneOffset() * 60000)) / 1000),
token = ' ago',
i = 0,
format;
if (seconds < 0) {
seconds = Math.abs(seconds);
token = '';
}
while (format = time_formats[i++]) {
if (seconds < format[0]) {
if (format.length == 2) {
return format[1] + (i > 1 ? token : ''); // Conditional so we don't return Just Now Ago
} else {
return Math.round(seconds / format[2]) + ' ' + format[1] + (i > 1 ? token : '');
}
}
}
// overflow for centuries
if(seconds > 4730400000)
return Math.round(seconds / 4730400000) + ' centuries' + token;
return date_str;
};
if(typeof jQuery != 'undefined') {
jQuery.fn.humane_dates = function(){
return this.each(function(){
var date = humane_date(this.title);
if(date && jQuery(this).text() != date) // don't modify the dom if we don't have to
jQuery(this).text(date);
});
};
}
Although maven exec does the trick here, I found it pretty poor for a real test. While waiting for maven shell, and hoping this could help others, I finally came up with this repo: mvnexec.
Clone it, and symlink the script somewhere in your path. I use ~/bin/mvnexec, as I have ~/bin in my path. I think mvnexec is a good name for the script, but it is up to you to change the symlink...
Launch it from the root of your project, where you can see src and target dirs.
The script searches for classes with a main method, offering a selection to choose one (example with the mavenized JMeld project):
$ mvnexec
1) org.jmeld.ui.JMeldComponent
2) org.jmeld.ui.text.FileDocument
3) org.jmeld.JMeld
4) org.jmeld.util.UIDefaultsPrint
5) org.jmeld.util.PrintProperties
6) org.jmeld.util.file.DirectoryDiff
7) org.jmeld.util.file.VersionControlDiff
8) org.jmeld.vc.svn.InfoCmd
9) org.jmeld.vc.svn.DiffCmd
10) org.jmeld.vc.svn.BlameCmd
11) org.jmeld.vc.svn.LogCmd
12) org.jmeld.vc.svn.CatCmd
13) org.jmeld.vc.svn.StatusCmd
14) org.jmeld.vc.git.StatusCmd
15) org.jmeld.vc.hg.StatusCmd
16) org.jmeld.vc.bzr.StatusCmd
17) org.jmeld.Main
18) org.apache.commons.jrcs.tools.JDiff
#?
If one is selected (by typing its number), you are prompted for arguments (you can avoid this with mvnexec -P).
By default it compiles the project on every run, but you can avoid that using mvnexec -B.
It allows searching only test classes with -M or --no-main, or only main classes with -T or --no-test. It also has a filter-by-name option, -f <whatever>.
Hope this could save you some time, for me it does.
The PHP manual explains both quite well:
http://php.net/manual/en/reserved.variables.server.php # REQUEST_URI
http://php.net/manual/en/reserved.variables.get.php # for the $_GET["q"] variable
If you don't mind picking the same item again at some other time:
$items[rand(0, count($items) - 1)];
Please use the code below:
try {
String uriText =
"mailto:emailid" +
"?subject=" + Uri.encode("Feedback for app") +
"&body=" + Uri.encode(deviceInfo);
Uri uri = Uri.parse(uriText);
Intent emailIntent = new Intent(Intent.ACTION_SENDTO);
emailIntent.setData(uri);
startActivity(Intent.createChooser(emailIntent, "Send email using..."));
} catch (android.content.ActivityNotFoundException ex) {
Toast.makeText(ContactUsActivity.this, "No email clients installed.", Toast.LENGTH_SHORT).show();
}
I had this problem and did not find any solution which I liked, so I created a python wrapper for mpg321: mpyg321.
You would need to have mpg321 installed on your computer, and then do pip install mpyg321.
The usage is pretty simple:
from mpyg321.mpyg321 import MPyg321Player
from time import sleep
player = MPyg321Player() # instantiate the player
player.play_song("sample.mp3") # play a song
sleep(5)
player.pause() # pause playing
sleep(3)
player.resume() # resume playing
sleep(5)
player.stop() # stop playing
player.quit() # quit the player
You can also define callbacks for several events (music paused by user, end of song...).
You need SELECT privileges on the ALL_USERS view.
There is NNCron for Windows. It can schedule jobs to run periodically.
In my case the solution was as simple as starting up my virtual environment like so:
$ venv/scripts/activate
It turns out I am still fresh to Python :)
Like other people have said, generally Python strings are supposed to be immutable.
However, if you are using CPython, the implementation at python.org, it is possible to use ctypes to modify the string structure in memory.
Here is an example where I use the technique to clear a string.
Mark data as sensitive in python
I mention this for the sake of completeness, and this should be your last resort as it is hackish.
Let us see the source code below. Here the fragment name is DirectoryOfEbooks. After completion of the background task, I am replacing the frame with the current fragment, so the fragment gets refreshed and reloads its data.
import android.app.ProgressDialog;
import android.content.DialogInterface;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.os.AsyncTask;
import android.os.Bundle;
import android.support.v4.app.Fragment;
import android.support.v4.app.FragmentTransaction;
import android.support.v4.view.MenuItemCompat;
import android.support.v7.app.AlertDialog;
import android.support.v7.widget.DefaultItemAnimator;
import android.support.v7.widget.GridLayoutManager;
import android.support.v7.widget.LinearLayoutManager;
import android.support.v7.widget.RecyclerView;
import android.support.v7.widget.SearchView;
import android.view.LayoutInflater;
import android.view.Menu;
import android.view.MenuInflater;
import android.view.MenuItem;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;
import android.widget.Toast;
import com.github.mikephil.charting.data.LineRadarDataSet;
import java.util.ArrayList;
import java.util.List;
/**
* A simple {@link Fragment} subclass.
*/
public class DirectoryOfEbooks extends Fragment {
RecyclerView recyclerView;
branchesAdapter adapter;
LinearLayoutManager linearLayoutManager;
Cursor c;
FragmentTransaction fragmentTransaction;
SQLiteDatabase db;
List<branch_sync> directoryarraylist;
public DirectoryOfEbooks() {
// Required empty public constructor
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View view = inflater.inflate(R.layout.fragment_directory_of_ebooks, container, false);
directoryarraylist = new ArrayList<>();
db = getActivity().openOrCreateDatabase("notify", android.content.Context.MODE_PRIVATE, null);
c = db.rawQuery("select * FROM branch; ", null);
if (c.getCount() != 0) {
c.moveToFirst();
while (true) {
//String ISBN = c.getString(c.getColumnIndex("ISBN"));
String branch = c.getString(c.getColumnIndex("branch"));
branch_sync branchSync = new branch_sync(branch);
directoryarraylist.add(branchSync);
if (c.isLast())
break;
else
c.moveToNext();
}
recyclerView = (RecyclerView) view.findViewById(R.id.directoryOfEbooks);
adapter = new branchesAdapter(directoryarraylist, this.getContext());
adapter.setHasStableIds(true);
recyclerView.setItemAnimator(new DefaultItemAnimator());
System.out.println("ebooks");
recyclerView.setHasFixedSize(true);
linearLayoutManager = new LinearLayoutManager(this.getContext());
recyclerView.setLayoutManager(linearLayoutManager);
recyclerView.setAdapter(adapter);
System.out.println(adapter.getItemCount()+"adpater count");
}
// Inflate the layout for this fragment
return view;
}
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
//setContentView(R.layout.fragment_books);
setHasOptionsMenu(true);
}
public void onPrepareOptionsMenu(Menu menu) {
MenuInflater inflater = getActivity().getMenuInflater();
inflater.inflate(R.menu.refresh, menu);
MenuItem menuItem = menu.findItem(R.id.refresh1);
menuItem.setVisible(true);
}
public boolean onOptionsItemSelected(MenuItem item) {
if (item.getItemId() == R.id.refresh1) {
new AlertDialog.Builder(getContext()).setMessage("Refresh takes more than a Minute").setPositiveButton("Refresh Now", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
new refreshebooks().execute();
}
}).setNegativeButton("Refresh Later", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
}
}).setCancelable(false).show();
}
return super.onOptionsItemSelected(item);
}
public class refreshebooks extends AsyncTask<String,String,String>{
ProgressDialog progressDialog;
@Override
protected void onPreExecute() {
super.onPreExecute();
progressDialog=new ProgressDialog(getContext());
progressDialog.setMessage("\tRefreshing Ebooks .....");
progressDialog.setCancelable(false);
progressDialog.show();
}
@Override
protected String doInBackground(String... params) {
Ebooksync syncEbooks=new Ebooksync();
String status=syncEbooks.syncdata(getContext());
return status;
}
@Override
protected void onPostExecute(String s) {
super.onPostExecute(s);
if(s.equals("error")){
progressDialog.dismiss();
Toast.makeText(getContext(),"Refresh Failed",Toast.LENGTH_SHORT).show();
}
else{
fragmentTransaction = getActivity().getSupportFragmentManager().beginTransaction();
fragmentTransaction.replace(R.id.mainframe, new DirectoryOfEbooks());
fragmentTransaction.commit();
progressDialog.dismiss();
adapter.notifyDataSetChanged();
Toast.makeText(getContext(),"Refresh Successful",Toast.LENGTH_SHORT).show();
}
}
}
}
For me, the problem was that the anchor needs a title, and that was missing!
If you want to fire the event only on changes of your input use:
$('.s').bind('input', function(){
console.log("search!");
doSearch();
});
Use the npm list-contents module. It reads the contents and sub-contents of the given directory and returns the lists of files' and folders' paths.
const list = require('list-contents');
list("./dist",(o)=>{
if(o.error) throw o.error;
console.log('Folders: ', o.dirs);
console.log('Files: ', o.files);
});
This is one of the fastest solutions I use to quickly view web pages:
sudo npm install ripple-emulator -g
From then on just enter the directory of your html files and run
ripple emulate
then change the device to Nexus 7 landscape.
You can loop through all directories, including hidden directories (beginning with a dot), with:
for file in */ .*/ ; do echo "$file is a directory"; done
Note: using the list */ .*/ works in zsh only if there exists at least one hidden directory in the folder. In bash it will also show . and ..
Another possibility for bash to include hidden directories would be to use:
shopt -s dotglob;
for file in */ ; do echo "$file is a directory"; done
If you want to exclude symlinks:
for file in */ ; do
if [[ -d "$file" && ! -L "$file" ]]; then
echo "$file is a directory";
fi;
done
To output only the trailing directory name (A, B, C as in the question) in each solution, use this within the loops:
file="${file%/}" # strip trailing slash
file="${file##*/}" # strip path and leading slash
echo "$file is the directoryname without slashes"
mkdir /tmp/A /tmp/B /tmp/C "/tmp/ dir with spaces"
for file in /tmp/*/ ; do file="${file%/}"; echo "${file##*/}"; done
The simplest way to extract data from a DataTable when you have multiple data types (not just strings) is to use the Field<T> extension method available in the System.Data.DataSetExtensions assembly.
var id = row.Field<int>("ID"); // extract and parse int
var name = row.Field<string>("Name"); // extract string
From MSDN, the Field<T> method:
Provides strongly-typed access to each of the column values in the DataRow.
This means that when you specify the type it will validate and unbox the object.
For example:
// iterate over the rows of the datatable
foreach (var row in table.AsEnumerable()) // AsEnumerable() returns IEnumerable<DataRow>
{
var id = row.Field<int>("ID"); // int
var name = row.Field<string>("Name"); // string
var orderValue = row.Field<decimal>("OrderValue"); // decimal
var interestRate = row.Field<double>("InterestRate"); // double
var isActive = row.Field<bool>("Active"); // bool
var orderDate = row.Field<DateTime>("OrderDate"); // DateTime
}
It also supports nullable types:
DateTime? date = row.Field<DateTime?>("DateColumn");
This can simplify extracting data from a DataTable, as it removes the need to explicitly convert or parse the object into the correct types.
Just make sure you have the .htaccess file in your project's public directory. If you don't have the file, create one and paste the code below into your .htaccess file.
Code:
<IfModule mod_rewrite.c>
<IfModule mod_negotiation.c>
Options -MultiViews
</IfModule>
RewriteEngine On
# Redirect Trailing Slashes If Not A Folder...
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [L,R=301]
# Handle Front Controller...
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [L]
</IfModule>
Use MySQL's FIELD() function:
SELECT name, description, ...
FROM ...
WHERE id IN([ids, any order])
ORDER BY FIELD(id, [ids in order])
FIELD() will return the index of the first of the remaining parameters that is equal to the first parameter.
FIELD('a', 'a', 'b', 'c') will return 1.
FIELD('a', 'c', 'b', 'a') will return 3.
This will do exactly what you want if you paste the ids into the IN() clause and the FIELD() function in the same order.
If you could show the actual page, we could possibly help you better.
If you want to trigger the button only after the iframe is loaded, you might want to check if it has been loaded or use the iframe.onload:
<iframe .... onload='buttonWhatever(); '></iframe>
<script type="text/javascript">
function buttonWhatever() {
$("#<%=Button1.ClientID%>").click(function (event) {
$('#<%=TextBox1.ClientID%>').change(function () {
$('#various3').attr('href', $(this).val());
});
$("#<%=Button2.ClientID%>").click();
});
function showStickySuccessToast() {
$().toastmessage('showToast', {
text: 'Finished Processing!',
sticky: false,
position: 'middle-center',
type: 'success',
closeText: '',
close: function () { }
});
}
}
</script>
If someone is having a problem with finding CURL in the list in setup.exe (the Cygwin package manager), then try downloading the 64-bit version of the setup. It worked for me.
I would reconsider trying to find out if a file exists. Instead, you should try to open it (in Standard C or C++) in the same mode you intend to use it. What use is knowing that the file exists if, say, it isn't writable when you need to use it?
This should work:
DateTime.Now.ToString("hh.mm.ss.ffffff");
If you don't need it to be displayed and just need to know the time difference, don't convert it to a String. Just leave it as DateTime.Now, and use TimeSpan to know the difference between time intervals:
Example
DateTime start;
TimeSpan time;
start = DateTime.Now;
//Do something here
time = DateTime.Now - start;
label1.Text = String.Format("{0}.{1}", time.Seconds, time.Milliseconds.ToString().PadLeft(3, '0'));
If you want the object_name prefixed to any ID you should call form helpers on the form object:
- form_for(@message) do |f|
= f.label :email
This also makes sure any submitted data is stored in memory should there be any validation errors etc.
If you can't call the form helper method on the form object, for example if you're using a tag helper (radio_button_tag etc.) you can interpolate the name using:
= radio_button_tag "#{f.object_name}[email]", @message.email
In this case you'd need to specify the value manually to preserve any submissions.
Just tried this recently: when trying to select the JDK, /System/Library/Java/JavaVirtualMachines/ appears as empty when opening and selecting through IntelliJ, therefore I couldn't select the JDK.
I've found that to work around this, when the Finder window opens (after pressing [+] JDK), just use the shortcut Shift + CMD + G to specify the path (/System/Library/Java/JavaVirtualMachines/1.6.0.jdk in my case).
And voila, IntelliJ can find everything from that point on.
I had a similar problem but it had to do with the structure and class of the object. I would check how dih_y2$MemberID is formatted.
This seems to work easily.
Public Function LoginAsTech(ByVal UserID As String, ByVal Pass As String) As Boolean
Dim MyDoc As New mshtml.HTMLDocument
Dim DocElements As mshtml.IHTMLElementCollection = Nothing
Dim LoginForm As mshtml.HTMLFormElement = Nothing
ASPComplete = 0
WB.Navigate(VitecLoginURI)
BrowserLoop()
MyDoc = WB.Document.DomDocument
DocElements = MyDoc.getElementsByTagName("input")
For Each i As mshtml.IHTMLElement In DocElements
Select Case i.name
Case "seLogin$UserName"
i.value = UserID
Case "seLogin$Password"
i.value = Pass
Case Else
Exit Select
End Select
frmServiceCalls.txtOut.Text &= i.name & " : " & i.value & " : " & i.type & vbCrLf
Next i
'Old Method for Clicking submit
'WB.Document.Forms("form1").InvokeMember("submit")
'Better Method to click submit
LoginForm = MyDoc.forms.item("form1")
LoginForm.item("seLogin$LoginButton").click()
ASPComplete = 0
BrowserLoop()
MyDoc= WB.Document.DomDocument
DocElements = MyDoc.getElementsByTagName("input")
For Each j As mshtml.IHTMLElement In DocElements
frmServiceCalls.txtOut.Text &= j.name & " : " & j.value & " : " & j.type & vbCrLf
Next j
frmServiceCalls.txtOut.Text &= vbCrLf & vbCrLf & WB.Url.AbsoluteUri & vbCrLf
Return 1
End Function
This is a vectorized version of unutbu's answer:
def find_nearest(array, values):
array = np.asarray(array)
# the last dim must be 1 to broadcast in (array - values) below.
values = np.expand_dims(values, axis=-1)
indices = np.abs(array - values).argmin(axis=-1)
return array[indices]
image = plt.imread('example_3_band_image.jpg')
print(image.shape) # should be (nrows, ncols, 3)
quantiles = np.linspace(0, 255, num=2 ** 2, dtype=np.uint8)
quantiled_image = find_nearest(quantiles, image)
print(quantiled_image.shape) # should be (nrows, ncols, 3)
To remove the special characters, try
name = name.replace(/[!@#$%^&*]/g, "");
This is how I handle this, without using exceptions.
import sys
if sys.version_info.major > 2: # Python 3 or later
from urllib.parse import quote
else: # Python 2
from urllib import quote
According to http://msdn.microsoft.com/en-us/library/cc288472(VS.85).aspx#html, IE8 will have "strong" HTML 5 support. I haven't seen anything discussing exactly what "strong support" entails, but I can say that yes, some HTML5 stuff is going to make it into IE8.
Angular is based on observables instead of promises, unlike AngularJS 1.x; so when we try to get data using http, it returns an observable instead of a promise, as in your code:
return this.http
.get(this.configEndPoint)
.map(res => res.json());
Then, to get the data and show it in the view, we have to convert it into the desired form using RxJS functions like .map() and .subscribe().
.map() is used to convert the observable (received from the http request) into any form like .json() or .text(), as stated on Angular's official website.
.subscribe() is used to subscribe to that observable response and put it into some variable, from which we display it in the view:
this.myService.getConfig().subscribe(res => {
console.log(res);
this.data = res;
});
The other answers should work; however, I want to mention this from the Java doc:
FileWriter is meant for writing streams of characters. For writing streams of raw bytes, consider using a FileOutputStream.
Reading your method's code, you are about to write a String to the file; what you were doing was converting the String to raw bytes and then writing them, so I think using FileWriter is not a bad idea.
As for the newline problem, Writer has the method .write(String), which is convenient to use.
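As a minimal sketch (the file name out.txt is just an example), wrapping the FileWriter in a BufferedWriter also gives you newLine(), which writes the platform's line separator:
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
public class WriteStrings {
    public static void main(String[] args) throws IOException {
        // try-with-resources flushes and closes the writer automatically
        try (BufferedWriter writer = new BufferedWriter(new FileWriter("out.txt"))) {
            writer.write("first line");
            writer.newLine(); // platform-specific line separator
            writer.write("second line");
        }
    }
}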
You would simply need to do this in your onClick:
AlertDialog alertDialog = new AlertDialog.Builder(MainActivity.this).create();
alertDialog.setTitle("Alert");
alertDialog.setMessage("Alert message to be shown");
alertDialog.setButton(AlertDialog.BUTTON_NEUTRAL, "OK",
new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
dialog.dismiss();
}
});
alertDialog.show();
I don't know where you read that you need a DialogFragment simply for showing an alert.
Hope this helps.
One thing to watch out for, particularly if you are using a framework, is to check whether the application regenerates session ids between requests; anything that depends explicitly on the session id will run into problems, although obviously the rest of the data in the session will be unaffected.
If the application is regenerating session ids like this then you can end up with a situation where an ajax request in effect invalidates / replaces the session id in the requesting page.
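For illustration, a minimal sketch (assuming the javax.servlet API, Servlet 3.1+; the class name is made up) showing the id changing while the session data survives:
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
public class SessionIdServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String before = req.getSession(true).getId();
        // regenerate the id while keeping the session's attributes,
        // much like some frameworks do between requests
        String after = req.changeSessionId();
        resp.getWriter().println("old id: " + before + ", new id: " + after);
    }
}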
Within the dplyr world, try:
select(iris,contains("Sepal"))
See the Selection section in ?select for numerous other helpers like starts_with, ends_with, etc.
CREATE TEMPORARY TABLE IF NOT EXISTS to_table_name AS (SELECT * FROM from_table_name)
I got the same error when the script file I was including contained some special characters and I was running in local mode (directly from the local disk). In my case the solution was to explicitly declare the encoding:
<script src="my.js" charset="UTF-8"></script>
Go to Task Manager and check whether the services below are running (if not, start them):
OracleXETNSListener
OracleXEClrAgent
OracleServiceXE
For display only, String.Format can be used as follows:
double value = 123.456789;
String.Format("{0:0.00}", value);
Using System.Math.Round, the value can be assigned to other variables or manipulated as required:
double value = 123.456789;
System.Math.Round(value, 2);
<p> tags have built-in padding and margin. You could create a CSS selector combined with some JavaScript for instances when your <p> is empty. Probably overkill, but it should do what you need it to do.
CSS example: .NoPaddingOrMargin {padding: 0px; margin:0px}
document.getElementById('test').onclick = function () {
    document.getElementById('scripted').focus();
};
div:focus {
    background-color: Aqua;
}
<div>Element X (not focusable)</div>
<div tabindex="0">Element Y (user or script focusable)</div>
<div tabindex="-1" id="scripted">Element Z (script-only focusable)</div>
<div id="test">Set Focus To Element Z</div>
If you want to open any website, first you need to import the webbrowser module. Then just use webbrowser.open() to open a website, e.g.:
import webbrowser
webbrowser.open('https://yashprogrammer.wordpress.com/', new= 2)
The code says everything:
max@serv$ chmod 777 .
Okay, it doesn't say everything.
In UNIX and Linux, the ability to remove a file is not determined by the access bits of that file. It is determined by the access bits of the directory which contains the file.
Think of it this way -- deleting a file doesn't modify that file. You aren't writing to the file, so why should "w" on the file matter? Deleting a file requires editing the directory that points to the file, so you need "w" on that directory.
Do this
Build -> Rebuild Project
Delete your local branch with the following command:
git branch -d branch_name
you could also do
git branch -D branch_name
which basically forces the delete (even if the local branch has not been merged to its source).
Filter: - A filter, as the name suggests, is a Java class executed by the servlet container for each incoming HTTP request and for each HTTP response. This way, it is possible to process incoming HTTP requests before they reach the resource, such as a JSP page, a servlet or a simple static page; in the same way, it is possible to process the outbound HTTP response after resource execution.
Interceptor: - Spring interceptors are similar to servlet filters, but they act in the Spring context, so they are more powerful for managing HTTP requests and responses, and they can implement more sophisticated behavior because they can access the whole Spring context.
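For illustration, a minimal servlet filter sketch (assuming the javax.servlet API; the class name and log messages are made up):
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
public class LoggingFilter implements Filter {
    @Override
    public void init(FilterConfig config) throws ServletException { }
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // runs before the request reaches the resource (servlet, JSP, static page)
        System.out.println("before resource");
        chain.doFilter(request, response); // pass control down the filter chain
        // runs after the resource has produced its response
        System.out.println("after resource");
    }
    @Override
    public void destroy() { }
}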
Instead of using $http.get('abc/xyz/getSomething') try to use $http.jsonp('abc/xyz/getSomething')
return{
getList:function(){
return $http.jsonp('http://localhost:8080/getNames');
}
}
You can use this function to get the absolute value:
+(NSNumber *)absoluteValue:(NSNumber *)input {
return [NSNumber numberWithDouble:fabs([input doubleValue])];
}
Change the permissions of the directory you want to save to so that all users have read and write permissions.
How about:
ECMA-48 - Control Functions for Coded Character Sets, 5th edition (June 1991) - A standard defining the color control codes, which is apparently also supported by xterm.
SGR 38 and 48 were originally reserved by ECMA-48, but were fleshed out a few years later in a joint ITU, IEC, and ISO standard, which comes in several parts and which (amongst a whole lot of other things) documents the SGR 38/48 control sequences for direct colour and indexed colour:
There's a column for xterm in this table on the Wikipedia page for ANSI escape codes
ArrayBlockingQueue has a lower memory footprint: it reuses its preallocated slots, unlike LinkedBlockingQueue, which has to create a LinkedBlockingQueue$Node object for each new insertion.
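A minimal sketch of the two side by side (the capacity of 1000 is arbitrary); they behave the same here, the difference is in what each one allocates per element:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
public class QueueFootprint {
    public static void main(String[] args) throws InterruptedException {
        // one fixed array allocated up front; its slots are reused
        BlockingQueue<Integer> array = new ArrayBlockingQueue<>(1000);
        // a new Node object is allocated for every insertion
        BlockingQueue<Integer> linked = new LinkedBlockingQueue<>(1000);
        array.put(42);
        linked.put(42);
        System.out.println(array.take() + " " + linked.take());
    }
}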
GNU parallel and xargs: these two tools can make scripts simpler and also control the maximum number of threads (a thread pool). E.g.:
seq 10 | xargs -P4 -I'{}' echo '{}'
or:
seq 10 | parallel -j4 echo '{}'
See also: how to write a process-pool bash shell
I have made this program, and here is the bit of code that actually calculates the check digit:
#Get the 10 digit number
number=input("Please enter ISBN number: ")
#Explained below
no11 = (((int(number[0])*11) + (int(number[1])*10) + (int(number[2])*9) + (int(number[3])*8)
+ (int(number[4])*7) + (int(number[5])*6) + (int(number[6])*5) + (int(number[7])*4) +
(int(number[8])*3) + (int(number[9])*2))/11)
#Round to 1 dp
no11 = round(no11, 1)
#explained below
no11 = str(no11).split(".")
#get the remainder and check digit
remainder = no11[1]
no11 = (11 - int(remainder))
#Calculate 11 digit ISBN
print("Correct ISBN number is " + number + str(no11))
It's a long line of code, but it splits the number up, multiplies the digits by the appropriate amounts, adds them together and divides the sum by 11, all in one line of code. The .split() function just creates a list (split at the decimal point) so you can take the 2nd item in the list and subtract it from 11 to find the check digit. This could also be made even more efficient by changing these two lines:
remainder = no11[1]
no11 = (11 - int(remainder))
To this:
no11 = (11 - int(no11[1]))
Hope this helps :)
On my mac version of VS Code, I select the section, then the shortcut is Ctrl+j to remove line breaks.
I think the "X" argument of regr.fit needs to be a matrix, so the following should work.
regr = LinearRegression()
regr.fit(df2.iloc[1:1000, [5]].values, df2.iloc[1:1000, 2].values)
This is a more detailed version of @Narendra's code.
First you need an instance of the 2nd fragment. Then you need FragmentManager and FragmentTransaction objects. The complete code is as below:
Fragment2 fragment2=new Fragment2();
FragmentManager fragmentManager=getActivity().getFragmentManager();
FragmentTransaction fragmentTransaction=fragmentManager.beginTransaction();
fragmentTransaction.replace(R.id.content_main,fragment2,"tag");
fragmentTransaction.addToBackStack(null);
fragmentTransaction.commit();
Hope this will work. In case you use androidx, you need getSupportFragmentManager() instead of getFragmentManager().
Actually None is much better for "magic" values:
class Cheese():
def __init__(self, num_holes = None):
if num_holes is None:
...
Now if you want complete freedom of adding more parameters:
class Cheese():
def __init__(self, *args, **kwargs):
#args -- tuple of anonymous arguments
#kwargs -- dictionary of named arguments
self.num_holes = kwargs.get('num_holes',random_holes())
To better explain the concept of *args and **kwargs (you can actually change these names):
def f(*args, **kwargs):
print 'args: ', args, ' kwargs: ', kwargs
>>> f('a')
args: ('a',) kwargs: {}
>>> f(ar='a')
args: () kwargs: {'ar': 'a'}
>>> f(1,2,param=3)
args: (1, 2) kwargs: {'param': 3}
The problem is not the range; the problem is how the end of the slice is calculated. With a fixed number 10, the simple for loop is fine, but with a calculated size like bfl.Size() you get a function call on every iteration. A simple range over an int32 would help, because this evaluates bfl.Size() only once.
type BFLT PerfServer
func (this *BFLT) Call() {
bfl := MqBufferLCreateTLS(0)
for this.ReadItemExists() {
bfl.AppendU(this.ReadU())
}
this.SendSTART()
// size := bfl.Size()
for i := int32(0); i < bfl.Size() /* size */; i++ {
this.SendU(bfl.IndexGet(i))
}
this.SendRETURN()
}
For me, the Android SDK didn't seem to be able to find the styles definition. Everything was wired correctly and doing a simple project clean fixed it for me.
@Override
protected void onPostExecute(final Boolean success) {
mProgressDialog.dismiss();
mProgressDialog = null;
Setting the value to null works for me.